Today is the last day to register an intent to submit!
Until Midnight Eastern Time!
See you on: https://ifcomp.org/ !!
Okay, added a few more features and I'm really liking the results. I added the `-` operator for `dice_groups`, which was a bit trickier than I thought it'd be. I also added the ability to roll a dice group multiple times via an `x`|`*` operator. I removed the requirement that a dice group must have a damage type associated with it, since a dice group is the only way you can roll something like `1d4-1`: each roll has a minimum value of 1 associated with it.
The code lives here: https://codeberg.org/JamesTheBard/dice-roller
Okay, _now_ I'm done. Fixed a few parser errors and implemented a skew option (the `^` value) that will push the average value towards either 1 or the maximum of the dice. I like that once you get the parser up and running, it's easy to add stuff to it. I am now the official owner of a completely overkill dice-rolling program.
For the example below, the skew is `2.0`. The random value is raised to the `1/skew` power before being multiplied by the number of sides of the die. If skew goes up, so do the results. If skew goes down, well, so do the results.
```
$ python main.py "^2 (2d6+2d8+12)[fire]+1d8[piercing]" | jq .
{
  "results": {
    "fire": 37,
    "piercing": 7
  }
}
```
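The skew transform described above can be sketched in a few lines of Python. This is a minimal illustration of the idea, not the repo's actual code; `skewed_roll` is a hypothetical name:

```python
import math
import random

def skewed_roll(sides, skew=1.0):
    # Raise a uniform random value to the 1/skew power, then scale by
    # the number of sides. skew > 1 pushes rolls toward the maximum;
    # skew < 1 pushes them toward 1; skew == 1 is a fair die.
    u = random.random() ** (1.0 / skew)
    return max(1, math.ceil(u * sides))

# With skew=2.0, the average d6 roll lands noticeably above 3.5.
rolls = [skewed_roll(6, skew=2.0) for _ in range(10_000)]
print(sum(rolls) / len(rolls))
```

The `max(1, ...)` clamp is what gives every die its minimum value of 1, matching the behavior mentioned in the earlier post.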
Before starting the big work of porting my experimental #LiveCoding #uzulang #godwit 's #parser from #Alex and #Happy to #Parsec, I finally managed to get a snapshot working in #Termux on my phone: I made a tarball including the generated #Haskell files on a machine where Alex and Happy were available and transferred it over the network.
This isn't ideal, as I can't edit the parser on my phone (no Alex/Happy there, at least no recent enough versions; old ones are available via cjacker's 2019 Hugs improvements), so I still want to switch to Parsec, which is not so horribly #GHC -only (Parsec has a version that works with #MicroHs and probably #Hugs #HaskellHugs too).
I put the snapshot at
https://mathr.co.uk/web/godwit.html#Download and meanwhile updated the bootstrap script as patching MicroHs isn't necessary any more.
mdq - by yshavit
https://github.com/yshavit/mdq
like #jq but for #Markdown: find specific elements in an md doc
Also available as a crate:
https://docs.rs/mdq/latest/mdq/
Tagging @wader
Any #TOML nerds? Would you say this is valid TOML? (I'm building a parser.)
```
inline = { array = [ 1,
2 ] }
```
The spec says "No newlines are allowed between the curly braces unless they are valid within a value."
This includes multi-line strings, but do you interpret it to permit newlines in an array? The array itself is technically a single value inside which newlines are valid. It is obviously not "in the spirit" of inline tables but the ABNF grammar allows it.
QapGen: Building powerful C++ parsers
QapDSLv2 is a language that is translated into plain C++ code. It lets you define grammars/parsing rules for program code conveniently and compactly, greatly simplifying the development of compilers/analyzers/translators. QapGen is a generator of lexer trees/parsers described in QapDSLv2. The QapDSLv2 grammar itself is 100% described in QapDSLv2, so QapGen, as the main reader of that grammar, generates part of its own code (the entire QapDSLv2 parser). The main features of QapDSLv2 + QapGen are: 1) No tokenization stage: the lexer tree splits the input stream into lexemes and stores them in strongly typed tree-like C++ structures, skipping tokenization. 2) Generation of optimized polymorphic lexer code. 3) Full preservation of all lexemes (even separators such as spaces/newlines and comments) in the resulting tree. 4) The ability to write either the original tree or a modified one back out as code/text without losing separators/comments. 5) Automatic generation of visitor code (a design pattern). And now an example of the juiciest part (recursively self-describing code):
```
struct t_target_struct:i_target_item{
  struct t_keyword{
    string kw=any_str_from_vec(split("struct,class",","));
    " "? // optional separator
  };
  struct t_body_semicolon:i_struct_impl{";"};
  struct t_body_impl:i_struct_impl{
    "{" // eat the brace
    vector<TAutoPtr<i_target_item>> nested?; // recursion!
    " "?
    vector<TAutoPtr<i_struct_field>> arr?; // parse the fields
    " "?
    TAutoPtr<t_cpp_code> c?; // the rest of the C++ code
    " "?
    "}"
  };
  struct t_parent{
    string a_or_c=any_str_from_vec(split("=>,:",","));
    " "?
    t_name name;
  };
  // parser entry point:
  TAutoPtr<t_keyword> kw?; // parse struct/class
  t_name name; // parse the name
  " "?
  TAutoPtr<t_parent> parent?;
  " "?
  TAutoPtr<i_struct_impl> body;
};
```
https://habr.com/ru/articles/925420/
#parser #parsergenerator #lexers #c++ #tree #ast #gamedev #dsl #compiler
In which I have Opinions about parsing and grammars - by Simon Tatham
https://www.chiark.greenend.org.uk/~sgtatham/quasiblog/parsing/
The tiny but complete #JSON #parser that I wrote in #Haskell years ago is now featured in the 200-and-change collection of programs by @dubroy: https://pdubroy.github.io/200andchange
#Development #Launches
ESLint can now lint HTML · The code linter delivers a new language plugin https://ilo.im/163v4b
_____
#ESLint #OpenSource #Coding #Linter #Parser #HTML #Npm #WebDev #Frontend
P : yml -> nix & yml -> python
T : yml -> python
#parser
I finally got around to wrapping up and publishing a first version of my #Rust crate ts-typed-ast. It's a crate inspired by Rowan that automatically generates a typed AST from a tree-sitter grammar. You can find it here: https://crates.io/crates/ts-typed-ast
It works similarly to Rowan and Swift's libsyntax. tree-sitter provides the green nodes, while this crate generates the red nodes.
I've used it a few times already, to prototype various toy programming languages. You write a grammar in tree-sitter, and then either evaluate the ts-typed-ast tree directly, or convert it to some other IR.
Using tree-sitter as the parser generator for a toy project is pretty nice. You get a powerful, declarative way to create a parser, and at the same time you benefit from the whole tree-sitter ecosystem. Things like incremental parsing, syntax highlighting, structural editing, and formatting with Topiary.
The main downside is that tree-sitter does not (yet) offer good error reporting and recovery, so when parsing fails, it often fails in dramatic, unhelpful ways. Not a big issue for experimenting, which is what this crate is for. Production-ready languages probably need bespoke parsers anyway.
Red Green Syntax Trees - an Overview | by Will Speak (aka Plingdollar):
https://willspeak.me/2021/11/24/red-green-syntax-trees-an-overview.html
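The red-green idea behind Rowan, libsyntax, and ts-typed-ast can be sketched in a few lines. This is a toy illustration, not any of those libraries' actual APIs: green nodes are immutable, position-independent, and store only their text length, while red nodes are thin wrappers that derive absolute offsets on the way down.

```python
class Green:
    """Immutable, shareable node: kind, children, and total text length."""
    def __init__(self, kind, children=(), text=""):
        self.kind = kind
        self.children = tuple(children)
        self.text = text
        self.length = len(text) + sum(c.length for c in children)

class Red:
    """Wrapper pairing a green node with its absolute offset in the file."""
    def __init__(self, green, offset=0):
        self.green = green
        self.offset = offset

    def children(self):
        # Offsets are computed lazily by accumulating child lengths,
        # so the green tree can be shared between edits unchanged.
        pos = self.offset
        for child in self.green.children:
            yield Red(child, pos)
            pos += child.length

# "foo bar" as three leaves under one expression node.
root = Green("expr", [Green("ident", text="foo"),
                      Green("ws", text=" "),
                      Green("ident", text="bar")])
offsets = [(r.green.kind, r.offset) for r in Red(root).children()]
print(offsets)
```

Because positions live only in the red layer, an edit can reuse every untouched green subtree and just rebuild the spine, which is what makes incremental reparsing cheap.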
To the #Rust #rustlang community: I once started to write a #compiler / #parser with the #nom #crate. However, I struggled with how to provide **multiple** errors with line+column indicators for a parsed context.
Now I've read a tiny bit about #syn and #chumsky
Are they the right crates for me? Are there others?
I'm not parsing Rust code, but a completely custom language (similar to pugjs)
(Boost for reach)
#lispyGopherClimate #lisp #programming #podcast #live Wednesday 0UTC https://archives.anonradio.net/202503050000_screwtape.mp3
#climateCrisis #haiku and #risk #inequality #essay by @kentpitman
https://netsettlement.blogspot.com/2013/08/lien-times-for-startups.html
#libre #archive update from @hairylarry https://gamerplus.org/@hairylarry/114106383066762290
https://www.european-lisp-symposium.org/2025/index.html
#ELS2025 submissions extended to Sunday. #LaTeX #ACM #primer / past #proceedings
Notes from my first #language #parser #commonLisp #mcclim #chess
If there are guests, there are guests(?)
An example of "ics describe" command output from Guile-ICS 0.7.0.