The lost art of XML — mmagueta

https://programming.dev/post/44564394

> There exists a peculiar amnesia in software engineering regarding XML. Mention it in most circles and you will receive knowing smiles, dismissive waves, the sort of patronizing acknowledgment reserved for technologies deemed passé. “Oh, XML,” they say, as if the very syllables carry the weight of obsolescence. “We use JSON now. Much cleaner.”

I love XML, when it is properly utilized. Which, in most cases, it is not, unfortunately.

JSON > CSV though, I fucking hate CSV. I do not get the appeal. “It’s easy to handle” – NO, it is not.

JSON is a reasonable middle ground, I’ll give you that

The biggest problem is that CSV is not a standardized format the way JSON is. For very simple cases it could serve as a database-like format, but the result depends on the parser, and that’s not ideal.

Exactly. I’ve seen so much data destroyed silently deep in some bioinformatics pipeline due to this that I’ve just become an anti-CSV advocate.

Use literally anything else that doesn’t need out-of-band “I’m using this dialect” information that has to match on both ends to prevent data loss.
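
For example, the exact same line yields different fields depending on which dialect the reader assumes (a minimal Python sketch; the sample data and dialect settings are made up):

    import csv
    import io

    line = '1;"smith; john";42\n'

    # Reader A assumes semicolon-delimited, double-quoted fields: three fields.
    print(next(csv.reader(io.StringIO(line), delimiter=';', quotechar='"')))
    # ['1', 'smith; john', '42']

    # Reader B assumes the comma-delimited default dialect: one mangled field.
    print(next(csv.reader(io.StringIO(line))))
    # ['1;"smith; john";42']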

CSV >>> JSON when dealing with large tabular data:

  1. CSV can be parsed row by row.
  2. CSV does not repeat column names; JSON’s usual list-of-records format does, which makes it bulkier and slower to parse.

  Point 1 can be solved with JSONL (see the sketch below), but point 2 is unavoidable.
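
  For reference, JSONL (one JSON document per line) streams row by row just like CSV; a minimal Python sketch (the file name is made up):

      import json

      # rows.jsonl holds one JSON document per line, e.g. [1, "bob", 44]
      with open("rows.jsonl") as f:
          for line in f:
              row = json.loads(line)  # one row at a time, constant memory
              print(row)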

    Yes…but compression

    And with CSV you just gotta pray that your parser parses the same way as their writer… and that their writer was correctly implemented… and that they set the settings correctly.

    Compression adds another layer of complexity for parsing.

    JSON can also have configuration mismatch problems. The main one that comes to mind is case (in)sensitivity for keys.
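
    For instance (a minimal Python sketch): strict JSON treats keys as case-sensitive, but some deserializers match keys case-insensitively, so two “different” keys can collide:

        import json

        doc = json.loads('{"ID": 1, "id": 2}')  # a strict parser keeps both keys
        print(doc)  # {'ID': 1, 'id': 2}

        # A consumer that normalizes key case silently drops one value:
        print({k.lower(): v for k, v in doc.items()})  # {'id': 2}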

    Nahh, you’re nitpicking there; large CSVs are gonna be compressed anyway.

    In practice I’ve never met a JSON I can’t parse; every second CSV is unparseable.

    No:

    • CSV isn’t good for anything unless you specify the exact dialect. CSV is unstandardized, so you can’t parse arbitrary CSV files correctly.
    • You don’t have to serialize tables to JSON in the “list of named records” format.

    Just use Zarr or something similar for array data. A table with more than 200 rows isn’t “human readable” anyway.
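
    A minimal sketch with the third-party zarr library (v2-style API; the store name, shape, and chunking here are made up):

        import numpy as np
        import zarr

        # A chunked on-disk array: readable chunk by chunk, no dialect guessing.
        z = zarr.open("table.zarr", mode="w", shape=(1_000_000, 3),
                      chunks=(10_000, 3), dtype="f8")
        z[0:2] = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])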

    { "columns": ["id", "name", "age"], "rows": [ [1, "bob", 44], [2, "alice", 7], ... ] }

    There ya go, problem solved without the unparseable ambiguity of CSV
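
    Producing that format from a list of records is a short transform (a Python sketch; the example records are made up and assumed to share the same keys):

        import json

        records = [{"id": 1, "name": "bob", "age": 44},
                   {"id": 2, "name": "alice", "age": 7}]
        columns = list(records[0])  # assumes every record has the same keys
        table = {"columns": columns,
                 "rows": [[r[c] for c in columns] for r in records]}
        print(json.dumps(table))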

    Please stop using CSV.

    Great, now read it row by row without keeping it all in memory.

    Wdym? That’s a parser implementation detail. Even if the parser you’re using needs to load the whole file into memory, it’s trivial to write your own parser that reads those entries one row at a time. You could even add random access if you get creative.
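
    For example, the third-party ijson library streams the “rows” array in constant memory (a minimal sketch; the file name is made up and assumed to hold the columns/rows document above):

        import ijson

        with open("table.json", "rb") as f:
            # Yields each element of the top-level "rows" array without
            # loading the whole document into memory.
            for row in ijson.items(f, "rows.item"):
                print(row)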

    That’s one of the benefits of JSON: it is dead simple to parse.