DSL::Bulgarian

Building computational workflows with natural language commands in Bulgarian.

In brief

This Raku package facilitates the specification of computational workflows using natural language commands in Bulgarian.

Using Domain Specific Languages (DSLs), executable code is generated for different programming languages: Julia, Python, R, Raku, and Wolfram Language.

Translation to other natural languages is also supported: English, Korean, Russian, and Spanish.

Data query (wrangling) workflows

Translate Bulgarian data wrangling specifications to different natural and programming languages:

use DSL::English::DataQueryWorkflows;

my $command = '
зареди данните iris;
вземи елементите от 1 до 120;
филтрирай чрез Sepal.Width е по-голямо от 2.4 и Petal.Length е по-малко от 5.5;
групирай с колоната Species;
покажи размерите
';
for <English Python::pandas Raku::Reshapers Spanish Russian> -> $t {
    say '=' x 60, "\n", $t, "\n", '-' x 60;
   say ToDataQueryWorkflowCode($command, $t, language => 'Bulgarian', format => 'code');
}
# ============================================================
# English
# ------------------------------------------------------------
# load the data table: "iris"
# take elements from 1 to 120
# filter with the predicate: ((Sepal.Width greater than 2.4) and (Petal.Length less than 5.5))
# group by the columns: Species
# show the count(s)
# ============================================================
# Python::pandas
# ------------------------------------------------------------
# obj = example_dataset('iris')
# obj = obj.iloc[1-1:120]
# obj = obj[((obj["Sepal.Width"]> 2.4) & (obj["Petal.Length"]< 5.5))]
# obj = obj.groupby(["Species"])
# print(obj.size())
# ============================================================
# Raku::Reshapers
# ------------------------------------------------------------
# my $obj = example-dataset('iris') ;
# $obj = $obj[ (1 - 1) ... (120 - 1 ) ] ;
# $obj = $obj.grep({ $_{"Sepal.Width"} > 2.4 and $_{"Petal.Length"} < 5.5 }).Array ;
# $obj = group-by($obj, "Species") ;
# say "counts: ", $obj>>.elems
# ============================================================
# Spanish
# ------------------------------------------------------------
# cargar la tabla: "iris"
# tomar los elementos de 1 a 120
# filtrar con la condicion: ((Sepal.Width más grande 2.4) y (Petal.Length menos 5.5))
# agrupar con columnas: "Species"
# mostrar recuentos
# ============================================================
# Russian
# ------------------------------------------------------------
# загрузить таблицу: "iris"
# взять элементы с 1 по 120
# фильтровать с предикатом: ((Sepal.Width больше 2.4) и (Petal.Length меньше 5.5))
# групировать с колонками: Species
# показать число

Classification workflows

use DSL::English::ClassificationWorkflows;

my $command = '
използвай dfTitanic;
раздели данните с цепещо съотношение 0.82;
направи gradient boosted trees класификатор;
покажи TruePositiveRate и FalsePositiveRate;
';

for <English Russian WL::ClCon> -> $t {
    say '=' x 60, "\n", $t, "\n", '-' x 60;
    say ToClassificationWorkflowCode($command, $t, language => 'Bulgarian', format => 'code');
}
# ============================================================
# English
# ------------------------------------------------------------
# use the data: dfTitanic
# split into training and testing data with the proportion 0.82
# train classifier with method: gradient boosted trees
# ============================================================
# Russian
# ------------------------------------------------------------
# ΠΈΡΠΏΠΎΠ»ΡŒΠ·ΠΎΠ²Π°Ρ‚ΡŒ Π΄Π°Π½Π½Ρ‹Π΅: dfTitanic
# Ρ€Π°Π·Π΄Π΅Π»ΠΈΡ‚ΡŒ Π΄Π°Π½Π½Ρ‹Π΅ Π½Π° ΠΏΡ€ΠΎΠΏΠΎΡ€Ρ†ΠΈΡŽ 0.82
# ΠΎΠ±ΡƒΡ‡ΠΈΡ‚ΡŒ классификатор ΠΌΠ΅Ρ‚ΠΎΠ΄ΠΎΠΌ: gradient boosted trees
# ============================================================
# WL::ClCon
# ------------------------------------------------------------
# ClConUnit[ dfTitanic ] \[DoubleLongRightArrow]
# ClConSplitData[ 0.82 ] \[DoubleLongRightArrow]
# ClConMakeClassifier[ "GradientBoostedTrees" ] \[DoubleLongRightArrow]
# ClConClassifierMeasurements[ {"Recall", "FalsePositiveRate"} ] \[DoubleLongRightArrow] ClConEchoValue[]

Latent Semantic Analysis

use DSL::English::LatentSemanticAnalysisWorkflows;

my $command = '
създай със textHamlet;
направи документ-термин матрица със автоматични стоп думи;
приложи LSI функциите IDF, TermFrequency, и Cosine;
извади 12 теми чрез NNMF и максимален брой стъпки 12;
покажи таблица на темите с 12 термина;
покажи текущата лентова стойност
';

for <English Python::LSAMon R::LSAMon Russian> -> $t {
    say '=' x 60, "\n", $t, "\n", '-' x 60;
    say ToLatentSemanticAnalysisWorkflowCode($command, $t, language => 'Bulgarian', format => 'code');
}
#ERROR: Possible misspelling of 'термини' as 'термина'.
#ERROR: Possible misspelling of 'термини' as 'термина'.
#ERROR: Possible misspelling of 'термини' as 'термина'.
#ERROR: Possible misspelling of 'термини' as 'термина'.
# ============================================================
# English
# ------------------------------------------------------------
# create LSA object with the data: textHamlet
# make the document-term matrix with the parameters: use the stop words: NULL
# apply the latent semantic analysis (LSI) functions: global weight function : "IDF", local weight function : "None", normalizer function : "Cosine"
# extract 12 topics using the parameters: method : Non-Negative Matrix Factorization (NNMF), max number of steps : 12
# show topics table using the parameters: numberOfTerms = 12
# show the pipeline value
# ============================================================
# Python::LSAMon
# ------------------------------------------------------------
# LatentSemanticAnalyzer(textHamlet).make_document_term_matrix( stop_words = None).apply_term_weight_functions(global_weight_func = "IDF", local_weight_func = "None", normalizer_func = "Cosine").extract_topics(number_of_topics = 12, method = "NNMF", max_steps = 12).echo_topics_table(numberOfTerms = 12).echo_value()
# ============================================================
# R::LSAMon
# ------------------------------------------------------------
# LSAMonUnit(textHamlet) %>%
# LSAMonMakeDocumentTermMatrix( stopWords = NULL) %>%
# LSAMonApplyTermWeightFunctions(globalWeightFunction = "IDF", localWeightFunction = "None", normalizerFunction = "Cosine") %>%
# LSAMonExtractTopics( numberOfTopics = 12, method = "NNMF",  maxSteps = 12) %>%
# LSAMonEchoTopicsTable(numberOfTerms = 12) %>%
# LSAMonEchoValue()
# ============================================================
# Russian
# ------------------------------------------------------------
# ΡΠΎΠ·Π΄Π°Ρ‚ΡŒ Π»Π°Ρ‚Π΅Π½Ρ‚Π½Ρ‹ΠΉ сСмантичСский Π°Π½Π°Π»ΠΈΠ·Π°Ρ‚ΠΎΡ€ с Π΄Π°Π½Π½Ρ‹Ρ…: textHamlet
# ΡΠ΄Π΅Π»Π°Ρ‚ΡŒ ΠΌΠ°Ρ‚Ρ€ΠΈΡ†Ρƒ Π΄ΠΎΠΊΡƒΠΌΠ΅Π½Ρ‚ΠΎΠ²-Ρ‚Π΅Ρ€ΠΌΠΈΠ½ΠΎΠ² с ΠΏΠ°Ρ€Π°ΠΌΠ΅Ρ‚Ρ€Π°ΠΌΠΈ: стоп-слова: null
# ΠΏΡ€ΠΈΠΌΠ΅Π½ΡΡ‚ΡŒ Ρ„ΡƒΠ½ΠΊΡ†ΠΈΠΈ Π»Π°Ρ‚Π΅Π½Ρ‚Π½ΠΎΠ³ΠΎ сСмантичСского индСксирования (LSI): глобальная вСсовая функция: "IDF", локальная вСсовая функция: "None", Π½ΠΎΡ€ΠΌΠ°Π»ΠΈΠ·ΡƒΡŽΡ‰Π°Ρ функция: "Cosine"
# ΠΈΠ·Π²Π»Π΅Ρ‡ΡŒ 12 Ρ‚Π΅ΠΌ с ΠΏΠ°Ρ€Π°ΠΌΠ΅Ρ‚Ρ€Π°ΠΌΠΈ: ΠΌΠ΅Ρ‚ΠΎΠ΄: Π Π°Π·Π»ΠΎΠΆΠ΅Π½ΠΈΠ΅ ΠΠ΅ΠΎΡ‚Ρ€ΠΈΡ†Π°Ρ‚Π΅Π»ΡŒΠ½Ρ‹Ρ… ΠœΠ°Ρ‚Ρ€ΠΈΡ‡Π½Ρ‹Ρ… Π€Π°ΠΊΡ‚ΠΎΡ€ΠΎΠ² (NNMF), максимальноС число шагов: 12
# ΠΏΠΎΠΊΠ°Π·Π°Ρ‚ΡŒ Ρ‚Π°Π±Π»ΠΈΡ†Ρƒ Ρ‚Π΅ΠΌΡ‹ ΠΏΠΎ ΠΏΠ°Ρ€Π°ΠΌΠ΅Ρ‚Ρ€Π°ΠΌ: numberOfTerms = 12
# ΠΏΠΎΠΊΠ°Π·Π°Ρ‚ΡŒ Ρ‚Π΅ΠΊΡƒΡ‰Π΅Π΅ Π·Π½Π°Ρ‡Π΅Π½ΠΈΠ΅ ΠΊΠΎΠ½Π²Π΅ΠΉΠ΅Ρ€Π°

Quantile Regression Workflows

use DSL::English::QuantileRegressionWorkflows;

my $command = '
създай с dfTemperatureData;
премахни липсващите стойности;
покажи данново обобщение;
премащабирай двете оси;
изчисли квантилна регресия с 20 възела и вероятности от 0.1 до 0.9 със стъпка 0.1;
покажи диаграма с дати;
покажи чертеж на абсолютните грешки;
покажи текущата лентова стойност
';

for <English R::QRMon Russian WL::QRMon> -> $t {
    say '=' x 60, "\n", $t, "\n", '-' x 60;
    say ToQuantileRegressionWorkflowCode($command, $t, language => 'Bulgarian', format => 'code');
}
#ERROR: Possible misspelling of 'възли' as 'възела'.
#ERROR: Possible misspelling of 'възли' as 'възела'.
#ERROR: Possible misspelling of 'възли' as 'възела'.
#ERROR: Possible misspelling of 'възли' as 'възела'.
# ============================================================
# English
# ------------------------------------------------------------
# create quantile regression object with the data: dfTemperatureData
# delete missing values
# show data summary
# rescale: over both regressor and value axes
# compute quantile regression with parameters: degrees of freedom (knots): 20, automatic probabilities
# show plot with parameters: use date axis
# show plot of relative errors
# show the pipeline value
# ============================================================
# R::QRMon
# ------------------------------------------------------------
# QRMonUnit( data = dfTemperatureData) %>%
# QRMonDeleteMissing() %>%
# QRMonEchoDataSummary() %>%
# QRMonRescale(regressorAxisQ = TRUE, valueAxisQ = TRUE) %>%
# QRMonQuantileRegression(df = 20, probabilities = seq(0.1, 0.9, 0.1)) %>%
# QRMonPlot( datePlotQ = TRUE) %>%
# QRMonErrorsPlot( relativeErrorsQ = TRUE) %>%
# QRMonEchoValue()
# ============================================================
# Russian
# ------------------------------------------------------------
# ΡΠΎΠ·Π΄Π°Ρ‚ΡŒ ΠΎΠ±ΡŠΠ΅ΠΊΡ‚ ΠΊΠ²Π°Π½Ρ‚ΠΈΠ»ΡŒΠ½ΠΎΠΉ рСгрСссии с Π΄Π°Π½Π½Ρ‹ΠΌΠΈ: dfTemperatureData
# ΡƒΠ΄Π°Π»ΠΈΡ‚ΡŒ ΠΏΡ€ΠΎΠΏΡƒΡ‰Π΅Π½Π½Ρ‹Π΅ значСния
# ΠΏΠΎΠΊΠ°Π·Π°Ρ‚ΡŒ сводку Π΄Π°Π½Π½Ρ‹Ρ…
# ΠΏΠ΅Ρ€Π΅ΠΌΠ°ΡΡˆΡ‚Π°Π±ΠΈΡ€ΠΎΠ²Π°Ρ‚ΡŒ: ΠΏΠΎ осям рСгрСссии ΠΈ Π·Π½Π°Ρ‡Π΅Π½ΠΈΠΉ
# Ρ€Π°ΡΡΡ‡ΠΈΡ‚Π°Ρ‚ΡŒ ΠΊΠ²Π°Π½Ρ‚ΠΈΠ»ΡŒΠ½ΡƒΡŽ Ρ€Π΅Π³Ρ€Π΅ΡΡΠΈΡŽ с ΠΏΠ°Ρ€Π°ΠΌΠ΅Ρ‚Ρ€Π°ΠΌΠΈ: стСпСни свободы (ΡƒΠ·Π»Ρ‹): 20, автоматичСскими вСроятностями
# ΠΏΠΎΠΊΠ°Π·Π°Ρ‚ΡŒ Π΄ΠΈΠ°Π³Ρ€Π°ΠΌΠΌΡƒ с ΠΏΠ°Ρ€Π°ΠΌΠ΅Ρ‚Ρ€Π°ΠΌΠΈ: использованиСм оси Π΄Π°Ρ‚
# ΠΏΠΎΠΊΠ°Π·Π°Ρ‚ΡŒ Π΄ΠΈΠ°Π³Ρ€Π°ΠΌΡƒ Π½Π° ΠΎΡ‚Π½ΠΎΡΠΈΡ‚Π΅Π»ΡŒΠ½Ρ‹Ρ… ошибок
# ΠΏΠΎΠΊΠ°Π·Π°Ρ‚ΡŒ Ρ‚Π΅ΠΊΡƒΡ‰Π΅Π΅ Π·Π½Π°Ρ‡Π΅Π½ΠΈΠ΅ ΠΊΠΎΠ½Π²Π΅ΠΉΠ΅Ρ€Π°
# ============================================================
# WL::QRMon
# ------------------------------------------------------------
# QRMonUnit[dfTemperatureData] \[DoubleLongRightArrow]
# QRMonDeleteMissing[] \[DoubleLongRightArrow]
# QRMonEchoDataSummary[] \[DoubleLongRightArrow]
# QRMonRescale["Axes"->{True, True}] \[DoubleLongRightArrow]
# QRMonQuantileRegression["Knots" -> 20, "Probabilities" -> Range[0.1, 0.9, 0.1]] \[DoubleLongRightArrow]
# QRMonDateListPlot[] \[DoubleLongRightArrow]
# QRMonErrorPlots[ "RelativeErrors" -> True] \[DoubleLongRightArrow]
# QRMonEchoValue[]

Recommender workflows

use DSL::English::RecommenderWorkflows;

my $command = '
създай Ρ‡Ρ€Π΅Π· dfTitanic;
ΠΏΡ€Π΅ΠΏΠΎΡ€ΡŠΡ‡Π°ΠΉ със ΠΏΡ€ΠΎΡ„ΠΈΠ»Π° "male" ΠΈ "died";
ΠΏΠΎΠΊΠ°ΠΆΠΈ Ρ‚Π΅ΠΊΡƒΡ‰Π°Ρ‚Π° Π»Π΅Π½Ρ‚ΠΎΠ²Π° стойност
';

for <English Python::SMRMon R::SMRMon Russian> -> $t {
    say '=' x 60, "\n", $t, "\n", '-' x 60;
    say ToRecommenderWorkflowCode($command, $t, language => 'Bulgarian', format => 'code');
}
# ============================================================
# English
# ------------------------------------------------------------
# create with data table: dfTitanic
# recommend with the profile: ["male", "died"]
# show the pipeline value
# ============================================================
# Python::SMRMon
# ------------------------------------------------------------
# obj = SparseMatrixRecommender().create_from_wide_form(data = dfTitanic).recommend_by_profile( profile = ["male", "died"]).echo_value()
# ============================================================
# R::SMRMon
# ------------------------------------------------------------
# SMRMonCreate(data = dfTitanic) %>%
# SMRMonRecommendByProfile( profile = c("male", "died")) %>%
# SMRMonEchoValue()
# ============================================================
# Russian
# ------------------------------------------------------------
# ΡΠΎΠ·Π΄Π°Ρ‚ΡŒ с Ρ‚Π°Π±Π»ΠΈΡ†Ρƒ: dfTitanic
# Ρ€Π΅ΠΊΠΎΠΌΠ΅Π½Π΄ΡƒΠΉ с ΠΏΡ€ΠΎΡ„ΠΈΠ»ΡŽ: ["male", "died"]
# ΠΏΠΎΠΊΠ°Π·Π°Ρ‚ΡŒ Ρ‚Π΅ΠΊΡƒΡ‰Π΅Π΅ Π·Π½Π°Ρ‡Π΅Π½ΠΈΠ΅ ΠΊΠΎΠ½Π²Π΅ΠΉΠ΅Ρ€Π°

Implementation notes

The rules in the file "DataQueryPhrases.rakumod" are derived from the file "DataQueryPhrases-template" using the package "Grammar::TokenProcessing", [AAp3].

In order to have Bulgarian commands parsed and interpreted into code, the work was split into four phases:

  1. Utilities preparation

  2. Bulgarian words and phrases addition and preparation

  3. Preliminary functionality experiments

  4. Packages code refactoring

Utilities preparation

Since the beginning of the work on translating the computational DSLs into programming code, it was clear that some of the required code transformations had to be automated.

While doing the preparation work -- and, in general, as the DSL-translation work matured -- it became clear that there are several directives to follow:

  1. Make and use Command Line Interface (CLI) scripts that do code transformation or generation.

  2. Adhere to two of Eric Raymond's 17 Unix Rules, [Wk1]:

    • Make data complicated when required, not the program

    • Write abstract programs that generate code instead of writing code by hand

In order to facilitate the "from Bulgarian" project, the package "Grammar::TokenProcessing", [AAp3], was "finalized." The initial versions of that package were used from the very beginning of the DSL grammar development in order to facilitate the handling of misspellings.
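The misspelling handling mentioned above can be illustrated with a minimal sketch based on a closeness cutoff over token similarity. This is not the actual Grammar::TokenProcessing mechanism (which generates fuzzy-matching token rules in Raku grammars); the token list and threshold below are illustrative, and Python is used for brevity:

```python
# Minimal sketch of fuzzy token matching for misspelled DSL words.
# NOT the actual Grammar::TokenProcessing implementation; illustrative only.
from difflib import get_close_matches

# A few known DSL tokens (taken from the examples in this document)
KNOWN_TOKENS = ["термини", "възли", "филтрирай", "групирай"]

def match_token(token: str, known=KNOWN_TOKENS, cutoff=0.6):
    """Return the known token closest to `token`, or None if nothing is close."""
    hits = get_close_matches(token, known, n=1, cutoff=cutoff)
    return hits[0] if hits else None

print(match_token("термина"))  # matches "термини"
print(match_token("възела"))   # matches "възли"
```

This mirrors the behavior seen in the transcripts above, where 'термина' and 'възела' are flagged as possible misspellings of 'термини' and 'възли' but the commands are still interpreted.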

(Current) recipe

This sub-section lists the steps for endowing an already developed workflows DSL package with Bulgarian translations.

Denote the DSL workflows we focus on as DOMAIN (workflows). For example, DOMAIN can stand for DataQueryWorkflows or RecommenderWorkflows.

Remark: In the recipe steps below, DOMAIN stands for DataQueryWorkflows.

It is assumed that:

  • The DOMAIN workflows in English are already developed.

  • Since both English and Bulgarian are analytical, non-agglutinative languages, "just" replacing English words with Bulgarian words in DOMAIN would produce good enough parsers of Bulgarian.

Here are the steps:

  1. Add global Bulgarian words (optional)

    1. Add Bulgarian words and phrases in the DSL::Shared file "Roles/Bulgarian/CommonSpeechParts-template".

    2. Generate the file Roles/Bulgarian/CommonSpeechParts.rakumod using the CLI script AddFuzzyMatching.

    3. Consider translating, changing, or refactoring global files, like Roles/English/TimeIntervalSpec.rakumod.

  2. Translate DOMAIN English words and phrases into Bulgarian

    1. Take the file DOMAIN/Grammar/DOMAIN-template and translate its words into Bulgarian

  3. Add the corresponding files into DSL::Bulgarian, [AAp1].

    1. Use the DOMAIN/Grammarish.rakumod role.

      • The English DOMAIN package should have such a role. If it does not, do the corresponding code refactoring.

    2. Test with implemented DOMAIN languages.

    3. See the example grammar and role in DataQueryWorkflows in DSL::Bulgarian.
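The word-for-word replacement assumption behind this recipe can be sketched as follows. The token map is a tiny illustrative fragment (Python for brevity), not the package's actual translation mechanism; in the real packages the substitution happens at the grammar-token level, not on raw strings:

```python
# Minimal sketch (illustrative only) of the assumption that, for analytic
# languages, a word-for-word Bulgarian-to-English token map can feed an
# already developed English DSL grammar.
BG_TO_EN = {
    "зареди": "load",
    "данните": "the data",
    "вземи": "take",
    "елементите": "the elements",
    "от": "from",
    "до": "to",
}

def translate_tokens(command: str) -> str:
    """Replace each known Bulgarian token with its English counterpart."""
    return " ".join(BG_TO_EN.get(tok, tok) for tok in command.split())

print(translate_tokens("зареди данните iris"))
# -> load the data iris
```

The resulting English-like command is then in the fragment of the language the English DOMAIN grammar already parses, which is why "just" translating the template words is expected to produce good enough Bulgarian parsers.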

References

Articles

[AA1] Anton Antonov, "Introduction to data wrangling with Raku", (2021), RakuForPrediction at WordPress.

[Wk1] Wikipedia entry, UNIX-philosophy rules.

Packages

[AAp1] Anton Antonov, DSL::Bulgarian, Raku package, (2022), GitHub/antononcube.

[AAp2] Anton Antonov, DSL::Shared, Raku package, (2018-2022), GitHub/antononcube.

[AAp3] Anton Antonov, Grammar::TokenProcessing, Raku project (2022), GitHub/antononcube.

[AAp4] Anton Antonov, DSL::English::ClassificationWorkflows, Raku package, (2018-2022), GitHub/antononcube.

[AAp5] Anton Antonov, DSL::English::DataQueryWorkflows, Raku package, (2020-2022), GitHub/antononcube.

[AAp6] Anton Antonov, DSL::English::LatentSemanticAnalysisWorkflows, Raku package, (2018-2022), GitHub/antononcube.

[AAp7] Anton Antonov, DSL::English::QuantileRegressionWorkflows, Raku package, (2018-2022), GitHub/antononcube.

[AAp8] Anton Antonov, DSL::English::RecommenderWorkflows, Raku package, (2018-2022), GitHub/antononcube.

DSL::Bulgarian v0.1.0

Building computational workflows with natural language commands in Bulgarian.

Authors

  • Anton Antonov

License

GPL-3.0-or-later

Dependencies

DSL::Shared:ver<0.1.2+>
DSL::English::ClassificationWorkflows:<0.8.0+>
DSL::English::DataQueryWorkflows:ver<0.5.9+>
DSL::English::LatentSemanticAnalysisWorkflows:ver<0.8.0+>
DSL::English::QuantileRegressionWorkflows:<0.8.0+>
DSL::English::RecommenderWorkflows:<0.8.0+>

Test Dependencies

Provides

  • DSL::Bulgarian::ClassificationWorkflows::Grammar
  • DSL::Bulgarian::ClassificationWorkflows::Grammar::ClassificationPhrases
  • DSL::Bulgarian::DataQueryWorkflows::Grammar
  • DSL::Bulgarian::DataQueryWorkflows::Grammar::DataQueryPhrases
  • DSL::Bulgarian::LatentSemanticAnalysisWorkflows::Grammar
  • DSL::Bulgarian::LatentSemanticAnalysisWorkflows::Grammar::LatentSemanticAnalysisPhrases
  • DSL::Bulgarian::QuantileRegressionWorkflows::Grammar
  • DSL::Bulgarian::QuantileRegressionWorkflows::Grammar::TimeSeriesAndRegressionPhrases
  • DSL::Bulgarian::RecommenderWorkflows::Grammar
  • DSL::Bulgarian::RecommenderWorkflows::Grammar::RecommenderPhrases

The Camelia image is copyright 2009 by Larry Wall. "Raku" is a trademark of the Yet Another Society. All rights reserved.