
XML: Xtensively Mucked-up Lists (or “How A Committee Screwed Up Sexps”)

Some folks are puzzled at why I avoid XML. They just can’t understand why I avoid it whenever I can and do crazy things like write ASN.1 specs, use native language terms when possible (like Python config files consisting of Python dicts, Erlang configs consisting of Erlang terms, etc.), consider YAML/JSON a decent last resort, and regard XML as a non-option.
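As a sketch of what I mean by native terms (the file name and keys here are made up purely for illustration), a Python config can simply be a module holding a dict:

# settings.py -- hypothetical config expressed as a plain Python dict
CONFIG = {
    "listen_port": 8080,
    "log_level": "info",
    "db": {"host": "localhost", "name": "appdb"},
}

# Consuming code just imports it; the language's own reader is the parser.
# from settings import CONFIG

No extra format, no extra parser, no impedance mismatch between the config and the program reading it.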

I maintain that XML sucks. I believe that it is, to date, the most perfectly horrible corruption of one of the most universal and simple concepts in computer science: sexps.

ZOMG! Someone screwed up sexps!

Let that one sink in. What a thing to say! How in the world would one even propose to screw up such a simple idea? Let’s consider an example…

Can you identify the semantic difference among the following examples?
(Inspired by the sample XML in the Python xml.etree docs)

Version 1

<country name="Liechtenstein">
  <rank>1</rank>
  <year>2008</year>
  <gdppc>141100</gdppc>
  <neighbor name="Austria" direction="E"/>
  <neighbor name="Switzerland" direction="W"/>
</country>

Version 2

<country>
  <name>Liechtenstein</name>
  <rank>1</rank>
  <year>2008</year>
  <gdppc>141100</gdppc>
  <neighbor>
    <name>Austria</name>
    <direction>E</direction>
  </neighbor>
  <neighbor>
    <name>Switzerland</name>
    <direction>W</direction>
  </neighbor>
</country>

Version 3

<country name="Liechtenstein" rank="1" year="2008" gdppc="141100">
  <neighbor name="Austria" direction="E"/>
  <neighbor name="Switzerland" direction="W"/>
</country>

Version 4

And here there is a deliberate semantic difference, meant to be illustrative of a certain property of trees… which is supposedly the whole point.

<entries>
  <country rank="1" year="2008" gdppc="141100">
    <name>Liechtenstein</name>
    <neighbors>
      <name direction="E">Austria</name>
      <name direction="W">Switzerland</name>
    </neighbors>
  </country>
</entries>

Which one should you choose for your application? Which one is obvious to a parser? For which could you most plausibly write a general parsing routine that pulls out data that actually means something? Which one could you turn into a program by defining the identifier tags as functions somewhere?
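To make that concrete, here is a rough sketch of what pulling the same three facts out of the first three versions looks like with Python's xml.etree (the module the sample data was borrowed from). Assume version_1_xml, version_2_xml, and version_3_xml hold the three documents above as strings:

import xml.etree.ElementTree as ET

root1 = ET.fromstring(version_1_xml)
root2 = ET.fromstring(version_2_xml)
root3 = ET.fromstring(version_3_xml)

# Version 1: the name is an attribute, the rank is child-element text.
name = root1.get("name")
rank = int(root1.find("rank").text)
neighbors = [n.get("name") for n in root1.findall("neighbor")]

# Version 2: everything is child-element text, so the code changes.
name = root2.find("name").text
rank = int(root2.find("rank").text)
neighbors = [n.find("name").text for n in root2.findall("neighbor")]

# Version 3: everything is an attribute, so the code changes again.
name = root3.get("name")
rank = int(root3.get("rank"))
neighbors = [n.get("name") for n in root3.findall("neighbor")]

Same country, same facts, three different access patterns; nothing in the documents themselves tells a generic parser which pattern to expect.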

Consider the last two questions carefully. The so-called “big data” people are hilarious, especially when they are XML people. There is a difference between “not a large enough sample to predict anything specific” and “a statistically significant sample from which generalities can be derived”, certainly, but that has a lot more to do with how representative the sample is than with the sheer number of petabytes you have sitting on disk somewhere. “Big Data” should really be about “Big Meaning”, but we seem to be so obsessed with the medium that we miss the message. Come to think of it, this is a problem that spans the techniverse — it just happens to be particularly obvious and damaging in the realm of data science.

The reason I hate XML so much is that the complexity and ambiguity introduced in an effort to make the X in XML mean something have crippled it in terms of clarity. What good is a data format that confuses the semantics of the data? XML is unnecessarily ambiguous to the people who have to parse (or design, document, discuss, edit, etc.) XML schemas, and it kills any hope of readily converting some generic data represented as XML into a program that can extract its meaning without first doing the extra work of researching a schema — which throws the entire concept of “universality” right out the window.

It’s all a lie. A tremendous amount of effort has been wasted over the years producing tools that do nothing more than automate away the mundane annoyances of dealing with the stupid way in which the structure is serialized. These efforts have been trumpeted as a major triumph, and yet they tell us nothing about the resulting structure, which is itself still more ambiguous than plain old sexps would have been. It’s not just that XML is a stupid angle-bracket notation when serialized (that’s annoying, but forgivable: most textual serializations are delimited by annoying parens, obnoxious semantic whitespace, or confusing ant poop — there just is no escape from the tyranny of ASCII). XML structure is broken and ambiguous no matter what representation it takes as characters in a file.

JSON and YAML: Not a pair that fits every foot (but XML sucks)

It is good to reflect on exactly how hard a problem it is to define a consistent cross-platform data representation. Most of the time (especially on the web) we just shovel data around, let things be inconsistent, avoid conflicts by pretending they don’t happen, and carry a general disregard for data consistency. This attitude is, sadly, what has come to characterize “NoSQL” in my mind, though in a strict sense that is not true at all (GIS and graph databases aren’t SQL systems, and some are very solid — PostGIS being the exception in that it is a surprisingly well-made extension to a surprisingly solid SQL-based RDBMS).

Obviously this isn’t a good attitude to have when dealing with things more important than small games or social media distractions. That said, most of the code written today seems to fall into those two categories, and many a career is spent exclusively roaming the range between the two (whether we should consider most of the crap that constitutes the web a “game” in itself is worth thinking about, whether we mean SEO, mindshare in the blogosphere, StackExchange rep, Facebook likes/friends/whatever, pingbacks, comment counts, etc.). We focus so much on these trivial and often meaningless cases that an entire generation of would-be programmers has no idea what the shape of data is really about.

When you really need a consistent data representation that can survive the network (ouch! that’s no mean feat!), that can reliably be coerced into a known, predictable, serialized representation, and that can be handled by generated code in nearly any language, you need ASN.1.

But ASN.1 is hard to learn (or even find resources on outside of telecom projects), and JSON and YAML are easy to reference and (initially) use. XML was made unnecessarily hard, I think as a cosmic joke on people who have never heard the term “S-expression”, but very basic XML seems easy, even if it’s something you would never want to type by hand (though that always seems to wind up being necessary, despite our best efforts at tooling…).

Why not just use JSON, YAML, or XML everywhere? That bit above, about a consistent representation — that’s why. Well, that’s part of why. Another part is that despite your best efforts to define things in XML or nest explicit declarations in YAML/JSON, you will always wind up either missing something or needing to change some type information you embedded as a nested element in your data definition, and then you have to write a sed or awk script just to modify it later (and if you’re the type who thinks “ah, a simple search/replace in my IDE…”, and “IDE” to you doesn’t basically equate to “Emacs” or your shell itself, you’re going to a gunfight with boxing gloves on — if you need a better IDE to manage your language, then you really need a better language).

The problem with YAML/JSON/XML is twofold. First, the types and structures are not formally defined anywhere, so while you may have a standard of sorts somewhere, there is no way to enforce that standard. Second, the usual workaround — including type information everywhere within your tags as attributes (in XML), nesting tagged groups or building a massive reference chain of type -> reference pointer -> data entry (in YAML), or nesting everything to an insane degree (in JSON) — means that changing the type of a field in a record type you have 20 million instances of is… problematic.
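As a rough sketch of what that embedded-type-information workaround looks like in practice (the field names here are invented for illustration), every record ends up dragging its own type annotations along with it:

import json

# Hypothetical record carrying its own type annotations in-band.
record = {
    "rank":  {"type": "integer", "value": 1},
    "year":  {"type": "integer", "value": 2008},
    "gdppc": {"type": "integer", "value": 141100},
}

print(json.dumps(record, indent=2))

# If "gdppc" later needs to become a decimal, every serialized copy of
# every record carries the stale annotation and has to be rewritten --
# hence the sed/awk scripts mentioned above.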

And we haven’t even discussed representation. “But its all text, right?” Oh… you silly person…

Everything is numbers. Two numbers, actually: 0 and 1. We know this, and yet we sort of forget that there are several ways of interpreting those numbers as bigger numbers, those bigger numbers as textual components, those textual components as actual text, and that actual text (finally) as the glyphs you see when you use “the typewriter part” or look at “the TV part” (or do anything with the little touchscreens we use everywhere these days, for which nobody seems to have worked out a genuinely solid interface solution just yet).

Every layer of that chain of interpretation I mentioned above can be done several ways. Every layer. Think about that for a second. Now, if you live purely in a single world (like modern Linuxes and probably newer versions of OSX) where there is only UTF-8, then about half the possible permutations are eliminated. If you only ever deal with unaccented characters that fall in the 7-bit range defined by ASCII, then several more permutations are eliminated — and you should dance with joy.
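A tiny Python illustration of how much rides on which interpretation you pick: the same two bytes come out as one character or two depending on the decoding you assume.

data = b"\xc3\xa9"              # two bytes, sitting innocently in a file

print(data.decode("utf-8"))     # 'é'  -- one character
print(data.decode("latin-1"))   # 'Ã©' -- two characters, same bytes
# data.decode("ascii")          # raises UnicodeDecodeError: bytes > 127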

Unless, that is, you deal with a bit of non-textual data in addition to the textual stuff. You know, like pictures and sounds and application-produced opaque binary data and whatnot. If that’s the case, you should tremble. Or… oh god, no… what if your data doesn’t stand alone? What if all those letters are supposed to actually mean something? “We have lots of data” isn’t nearly as important to customers as “we have lots of meanings” — but don’t ask a customer about that directly; they have no idea what you mean, because all the text stuff already means something to them.