If YAML didn’t have anchors and eight different whitespace formats, it’d be a great replacement for this kind of thing.
But yaml is a mess, and you’d think you could parse it easily, but you can’t.
YAML is redeemed by one thing only:
All JSON is valid YAML.
Yup! YAML is defined as a “strict superset” of JSON (or at least, it was the last time I checked).
It’s a lot like markdown and HTML; when you want to write something deeply structured and somewhat complex you can always drop back/down to the format with explicit closing delimiters and it just works™.
As someone who works with YAML regularly:
Fuck YAML.
Nah, YAML isn’t great by virtue of itself but by what it competes with. I far, far prefer it to any of the other BS. JSON is garbage for human authoring, and TOML obviously has the problems above. XML… obviously just for machines.
Again, only good because of its competition.
I think edn is almost the only more advanced and ergonomic alternative to JSON. edn is like an evolved JSON, but it’s interesting that its roots are far older than JSON’s.
The fact that you can very efficiently define whole applications and software just with edn (and Lisp syntax in general) is what makes it really amazing.
I think this blog post sheds more light on how we only need lisp for defining data and applications.
Because the 3rd panel looks better when you have dozens of physical properties to track. It also makes retrieval easier because you can get all the physical properties at once, instead of having to read every line.
For an example that small it doesn’t matter, but for something larger it could become a performance benefit.
A good way to feel that for yourself is to write a little program in both assembly and C.
Make sure the program needs to loop a bit and perhaps also requires some if/else logic.
A simple one would be to read 1000 integers and return the sum.
In C, you would do something like:
```c
int MAX = 1000;
int accumulator = 0;
int counter = 0;
while (counter < MAX) {
    accumulator = accumulator + value_at_next_memory_location_by_counter;
    counter = counter + 1;
}
```

In assembly, you would go (writing pseudo, because I have forgotten most assembly stuff):
```
set reg1 = 1000       // For max value
set accumulator = 0   // Just choose a register and consider it an accumulator.
                      // Older CPUs have a fixed accumulator and you can only
                      // operate on that; I am not considering that here.
set reg2 = 0          // For counter
tag LOOP:
  set flag if true reg2 < reg1
  jump if false -> END
  move from memory location @counter(reg2) to reg3
  add accumulator reg3
  add reg2 1
  goto -> LOOP
tag END:
```

I also realised that you could just try using C with goto instead of any loops and would realise similar things, but I’m not in the mood to rewrite my comment.
In conclusion, it is easier to understand something like BASIC, if you haven’t been introduced to other languages, but these {} structures end up making it easier to catch control flows at a glance.
That’s also the argument I use when telling people to have opening and closing brackets of the same level at the same indent, while people prefer stuff like:
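Presumably the colon above was followed by an example along these lines; the contrast is my reconstruction in C (function names mine). The first style is the parent’s preference, the second is what many people write instead:

```c
/* Allman style: braces of the same scope sit at the same indent */
int clamp_allman(int x)
{
    if (x < 0)
    {
        return 0;
    }
    return x;
}

/* K&R style: the opening brace hangs at the end of the line */
int clamp_knr(int x) {
    if (x < 0) {
        return 0;
    }
    return x;
}
```

Both compile to the same thing; the argument is purely about how quickly your eye can match an opening brace to its close.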
For programming languages that make use of {}, the reason is (almost always) scope.
Take for instance this:
```
for i in 0..10
    do_thing();
    do_other_thing();
```

compared to this:
```
for i in 0..10 {
    do_thing();
}
do_other_thing();
```

In the second one it’s clear you should loop do_thing() and then run do_other_thing() afterwards. The indentation is only for readability in the above, though. Languages that use indentation for scope look more similar to
```
for i in 0..10:
    do_thing()
do_other_thing()
```

and I find that shit harder to read.
It makes it harder to read the individual lines, but makes it easier to read them as a group, so you won’t have to read as many lines on your day to day.
TOML’s design is based on the idea that INI was a good format. This was always going to cause problems, as INI was never good, and never a format. In reality, it was hundreds of different formats people decided to use the same file extension for, all with their own incompatible quirks and rarely any ability to identify which variant you were using and therefore which quirks would need to be worked around.
The changes in the third panel were inevitable, as people have data with nested structure that they’re going to want to represent, and without significant whitespace, TOML was always going to need some kind of character to delimit nesting.
Well, Wikipedia does say:
The [TOML] project standardizes the implementation of the ubiquitous INI file format (which it has largely supplanted[citation needed]), removing ambiguity from its interpretation.
It’s basically just JSON that can generate itself!
You have inspired me.
I will make JSON with meta-programming
I will call it DyJSON, i.e. “Dynamic JSON” but pronounced “Die, Jason!”
It is JSON with meta-programming and the ability to call C functions from libraries
Example:
```
# This is a line comment

# Put your function definitions up here
(concat str_a str_b: "concat" "my-lib.so")   # Import a function through a C ABI
(make-person first_name last_name email -> { # Define our own generative func
  "name": (concat (concat $first_name " ") $last_name),
  "email": $email
})

# And then the JSON part which uses them
[
  (make-person "Jenny" "Craig" "[email protected]"),
  (make-person "Parson" "Brown" null)
]
```

As you can see, it is also a LISP to some degree.
Is there a need for this? A purpose? No. But some things simply should exist
Thank you for helping bring this language into existence
Here is the grammar:
```
<json>     ::= <value> | <fn-def> <json>
<value>    ::= <object> | <array> | <string> | <number> | <bool>
             | <fn-def> | <fn-app> | "null"
<object>   ::= "{" [ <member> { "," <member> } ] "}"
<member>   ::= <string> ":" <value>
<string>   ::= "\"" { <char> } "\""
<char>     ::= (ASCII other than "\"", "\\", 0-31, 127-159)
             | (Unicode other than ASCII)
             | "\\" ( "\"" | "\\" | "/" | "b" | "f" | "n" | "r" | "t"
                    | "u" <hex> <hex> <hex> <hex> )
<hex>      ::= /[A-Fa-f0-9]/
<array>    ::= "[" [ <value> { "," <value> } ] "]"
<number>   ::= <integer> [ <fraction> ] [ <exponent> ]
<integer>  ::= "0" | /[1-9][0-9]*/ | "-" <integer>
<fraction> ::= "." /[0-9]+/
<exponent> ::= ("E" | "e") [ "-" | "+" ] /[0-9]+/
<bool>     ::= "true" | "false"
<fn-def>   ::= "(" <ident> { <ident> } ("->" <value> | ":" <string> <string>) ")"
<ident>    ::= <startc> { <identc> }
<startc>   ::= /[A-Za-z_]/ or non-ASCII Unicode
<identc>   ::= <startc> | /[0-9-]/
<fn-app>   ::= "(" <ident> { <value> } ")"
<var>      ::= "$" <ident>
```

I think you’ve just invented Jsonnet, but with C integration.
Time to read this if you haven’t already
The JSON spec is not versioned. There were two changes to it in 2005 (one of them the removal of comments).
See, this is why we can’t have nice things.
They’re not supposed to contain data, but some parsers will let you access what’s written in comments. And so, of course, someone made use of that, and I had to extract data that was encoded basically like this:
```xml
<!-- Host: toaster, Location: moon, -->
<data>Actual XML follows...</data>
```

My best guess is that they added this data into comments rather than child nodes or attributes because they were worried some of the programs using this XML would not be able to handle an extension of the format.
Their stated justification is that people would abuse comments, using them to carry semantic or syntactic information. That’s a shit justification IMO.
As far as the additional complexity that comments bring, I understand that from a technical perspective but from an engineering-for-real-humans-in-the-real-world perspective that’s the kind of thing you just have to deal with if you want to design a good format.
Use $($var), e.g. Write-Host "Example-Issue: $($IssueVariable)", so that the variable isn't interpreted as a literal string by PowerShell.
Or by configuring your parser.
I do agree there are plenty of annoyances that shouldn’t exist in YAML but do, because someone had an opinionated belief at one point. For example, it shouldn’t try to guess that “yes”, “no”, “y”, and “n” are boolean values. Let the programmer handle that; if they write true/false, then go ahead and treat those as booleans. Times can also be a bit of a pain (under YAML 1.1, writing 12:00 is interpreted as the sexagesimal integer 720), but at least that’s something you can work around.
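For a feel of the guessing, here is roughly what a YAML 1.1 parser does with a few untagged scalars (example mine; YAML 1.2 dropped most of these resolutions):

```yaml
answers: [yes, no, y, n]   # YAML 1.1 loads all four as booleans
country: NO                # the classic "Norway problem": becomes false
time: 12:00                # sexagesimal integer: loads as 720
version: 1.2.3             # not a number, so it stays the string "1.2.3"
```

Quoting each value sidesteps all of this, which is why many style guides just say to quote every string.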
But there’s plenty in that article that’s only a problem because the writer made it one. Every language lets you make mistakes; markup languages aren’t any different. It’s not a bad thing that you can write strings without quotes. It’s not forcing you to do so. Anchors also make it simple to reuse YAML, and they’re completely optional. The issue with numbers (1.2 stays as 1.2 while 1.2.3 becomes “1.2.3”) is very nitpicky. It’s completely reasonable for it to try to treat numbers as numbers where it can. If type conversion is that big of an issue for you, then I really doubt you know what you’re doing.
On top of all this, YAML is just a superset of JSON. You can literally just paste JSON into your YAML file and it’ll process it just fine.
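Concretely, this means you can mix the two styles in one file; both halves below are valid YAML (example mine):

```yaml
# Block-style YAML
server:
  host: example.com
  port: 8080
# The same shape of data pasted straight from JSON, also valid YAML
client: {"host": "example.com", "retries": 3, "tls": true}
```

So whenever the whitespace rules get confusing for a deeply nested value, you can fall back to JSON-style flow syntax with explicit delimiters.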
I’m not saying it’s perfect, but if you want something that’s easy to read and write, even for people who aren’t techy, YAML is probably the best option.
I like this. I also like yaml, I’ve had very few issues with it and it’s nicer to work with than json.
JSON’s lack of support for trailing commas and comments makes it very annoying for everyday use.
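For instance, neither the comments nor the trailing comma here is legal JSON, so a strict parser rejects the whole file (example mine):

```
{
  "retries": 3,         // not allowed: comments
  "hosts": ["a", "b",]  // not allowed: trailing comma
}
```

Both are everyday conveniences in hand-edited config files, which is why dialects like JSONC and JSON5 exist to add them back.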