Archive for the ‘Science & Tech’ Category

Erlang: Getting Started Without Melting

Wednesday, July 11th, 2018

There are two things that might be meant when someone references “Erlang”: the language, and the environment (the EVM/BEAM and OTP). The first one, the language part, is actually super simple and quick to learn. The much larger, deeper part is learning what the BEAM does and how OTP makes your programs better.

It is clear that without an understanding of Erlang we’re not going to get very far in terms of understanding OTP and won’t be skilled enough to reliably interact with the runtime through a shell. So let’s forget about the runtime and OTP for a bit and just aim at the lowest, most common beginners’ task in coding: writing a script that tells me “Hello, World!” and shows whatever arguments I pass to it from the command line:

#! /usr/bin/env escript

% Example of an escript
-mode(compile).

main(Args) ->
    ok = io:setopts([{encoding, unicode}]),
    ok = io:format("Hello, world!~n"),
    io:format("I received the args: ~tp~n", [Args]).

Let’s save that in a file called e_script, run the command chmod +x e_script to make it executable, and take a look at how this works:

ceverett@takoyaki:~$ ./e_script foo bar
Hello, world!
I received the args: ["foo","bar"]
ceverett@takoyaki:~$

Cool! So it actually works. I can see a few things already:

  1. I need to know how to call some things from the standard library to make stuff work, like io:format/2
  2. io:setopts([{encoding, unicode}]) seems to make it OK to print UTF-8 characters to the terminal in a script
  3. An escript starts execution with a traditional main/1 function call

Some questions I might have include how or why we use the = for both assignment and assertion in Erlang, what the mantra “crash fast” really means, what keywords are reserved, and other issues which are covered in the Reference Manual (which is surprisingly small and quick to read and reference).
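
As a quick taste of the assignment/assertion point, here is a shell sketch (the exact error wording can vary a little between Erlang versions): the first = binds an unbound variable, the second succeeds as an assertion because both sides already match, and the third crashes with a badmatch.

1> X = 1.
1
2> X = 1.
1
3> X = 2.
** exception error: no match of right hand side value 2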

An issue some newcomers encounter is that navigating an unfamiliar set of documentation can be hard. Here are the most important links you will need to know to get familiar and do useful things with the sequential language:

This is a short list, but it covers the most common links you’ll want to know how to find. It is also easy to pull up any given module by doing a search for “erlang [module name]” on any search engine. (Really, any of them.)

In the rare case that erlang.org is having a hard time I maintain a mirror of the docs for various Erlang release versions here as well: http://zxq9.com/erlang/

Start messing with sequential Erlang. Don’t worry about being fancy and massively concurrent or maximizing parallelization or whatever — just mess around at first and get a feel for the language using escript. It is a lot of fun and makes getting into the more fully encompassing instructional material much more comfortable.

Erlang: ZJ v1.0.1 + JSON Test Suite (aka “What happens when you go public”)

Friday, June 29th, 2018

TL;DR

ZJ has been run through the JSON Test Suite at the recommendation of Michał Muskała, patched for compliance with all the required cases (and many optional behaviors), and now updated to v1.0.1. Complete results are on this ZJ wiki page.

Story

When I wrote ZJ my intention was only to scratch my own little itch: I needed a tiny, portable, single module JSON encoder/decoder for use on a relatively restricted set of data. I wanted a particular mapping between types, and writing it to work the way I wanted was easy.

I announced it when I felt meaningfully “done” because I thought someone else might like it. Some did. Some had suggestions. Some folks wondered where the tests were or if I was going to include any. I didn’t give these things much thought because it did what I wanted it to do already.

But then Oleg Tarasenko — someone I know well, have worked with in the past and respect very much — made a public query. In short, “Hey, what’s the plan with this? I want to use it, too.” Well crap. I don’t really have the time — or rather, I have other things I’d rather spend my time on. But it’s Oleg, so I suddenly care again. So now I’m roped into something. Maybe not hand-writing a bunch of tests or cranking up PropEr (which I’ve never been good at teaching to generate structured string input like JSON), but since the wonderful JSON Test Suite project already exists I can at least validate ZJ as a JSON decoder using its pre-built example data.

With everything fresh in my mind it only took a few hours to clean everything up to the point that ZJ is actually now fully compliant with all the “must” features, many of the implementation specific features, and a few extensions that are technically illegal but occur frequently in practice.

That’s the sort of thing that happens when you share your work. It starts off as a “works for me” tool, and then your friends get a hold of it. Because you’ve known them a while and trust them, you care a lot about what they have to say — even when they are just making gentle recommendations (or maybe especially when they are making gentle recommendations). The thing is, your friends are your friends because they have your best interests at heart, and in a software community, just as in a neighborhood community, very often your best interests are also their best interests. In fact, having another option for JSON parsing is better for everyone because there really is no One True Mapping, and it is especially good for everyone if a core piece of generic infrastructure has as permissive a license as possible (thanks, Marc) and the project can verify its claims to correctness — or at least vet its level of wrongness.

ZJ is a tiny project, but even such a small project demonstrates the positive dynamic that exists among people within a community. So thanks Oleg Tarasenko, Michał Muskała, Loïc Hoguin, and others who took the time to discuss the vagaries of JSON and prod me a little to clean things up a bit more. I’m much happier with v1.0.1 than v1.0.0 (and will like v1.0.2 even more because of this and that).

Appeal to New Coders

Joe Armstrong convinced me about two years ago that it is better to release half-grown ideas to let them germinate and grow in the light of the sun than to let them rot and be forgotten in the darkness of obscurity and personal timidity. Once again his position is validated. If you’re just getting into FOSS for the first time, release stuff. Don’t pester folks about it, but do announce your work so that eyes other than yours can get on it. Nothing but good comes of that (though sometimes it can be embarrassing — because you’ll occasionally be wrong, or lazy, or wrong because you were lazy…).

Hayabusa2: Approach to Ryugu timelapse

Friday, June 29th, 2018

GIF of the year.

Your tests don’t tell you what you think they do

Tuesday, June 26th, 2018

Yesterday I wrote a tiny JSON encoder/decoder in Erlang. While the Erlang community wasn’t in dire need of yet another JSON parser, the ones I saw around do things just a tiny bit differently than I want them to and writing a module against RFC-8259 isn’t particularly hard or time consuming.

Someone commented on (gasp!) the lack of tests in that module. They were right. I just needed the module to do two things, the code is boring, and I didn’t write tests. I’m such a rebel! Or a villain! Or… perhaps I’m just someone who values my time.

Maybe you’re thinking I’m one of those coding cowboys who goes hog wild on unsafe code! No. I’m not. Nothing could be further from the truth. What I have learned over the last 30 years of fiddling about with software is that hand-written tests are mostly a waste of time.

Here’s what happens:

  1. You write a new thingy.
  2. You throw all the common cases at it in the shell. It seems to work. Great!
  3. Being a prudent coder you basically translate the things you thought to throw at it in the shell into tests.
  4. You hook it up to an actual project you’re using somewhere — and it breaks!
  5. You fix the broken bits, and maybe add a test for whatever you fixed.
  6. Then other people start using it in their projects and stuff breaks quite a lot more ZOMG AHHH!

Where in all of this did your hand-written tests help out? If you write tests to define the bounds of the problem before you actually write your functions, then tests might help out quite a lot, because they deepen your understanding of the problem before you really tackle it head-on. Writing tests before code isn’t particularly helpful if you already thoroughly understand the problem and just need something to work, though.

When I wrote ZJ yesterday I needed it to work in the cases that I care about — and it did, right away. So I was happy. This morning, however, someone else decided to drop ZJ into their project and give it a go — and immediately ran into a problem! ZJ v0.1.0 returns an error if it finds trailing commas in JSON arrays or objects! Oh noes!

Wait… trailing commas aren’t legal in JSON. So what’s the deal? Would tests have discovered this problem? Of course not, because hand-written tests would have been bounded by the limits of my imagination and my imagination was hijacked by an RFC all day yesterday. But the real world isn’t an RFC, and if you’ve ever dealt with JSON in the wild that you’re not generating you’ll know that all sorts of heinous and malformed crap is clogging the intertubes, and most of it sports trailing commas.

My point here isn’t that testing is bad or always a waste of time; my point is that hand-written tests are prone to exactly the same problems as the code being tested: you wrote them, so they carry flaws of implementation, design and scope, just like the rest of your project.

“So when is testing good?” you might ask. As mentioned earlier, the cases where you are trying to model the problem in your mind for the first time, before you’ve written any handling code, are a great time to write tests, for no other reason than that they help you understand the problem. But that’s about as far as I go with hand-writing tests.

The three types of testing I like are:

  • type checks
  • machine generated (property testing)
  • real-world (user testing)

A good type checker like Dialyzer (or especially ghc’s type system, but that’s Haskell) can tell you a lot about your code in very short order. It isn’t unusual at all to have sections of code that are written to do things that are literally impossible, but that you wouldn’t know about until much later because, due simply to lack of imagination, hand-written tests quite often would never have executed that code, or not in a way that would reveal the structural error.
Typespecs: USE THEM
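
Here is a minimal sketch of what that looks like in practice (the module, type, and function names are invented for this example). The -spec narrows the function’s domain, so Dialyzer can flag calls that can never succeed without any test ever executing the code:

-module(shapes).
-export([area/1]).

-type shape() :: {square, number()} | {circle, number()}.

% The spec declares the intended domain and range.
-spec area(shape()) -> number().
area({square, Side})   -> Side * Side;
area({circle, Radius}) -> math:pi() * Radius * Radius.

% A call like area({triangle, 3}) anywhere in the codebase would be
% flagged by Dialyzer as breaking the contract -- no test required.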

Good property testing systems like PropEr and QuickCheck generate and run as many tests as you give them time to (really, it is just constrained by time and computing resources), and once they discover breakages can actually fuzz the problem out to pinpoint the exact failing cases and very often indicate the root cause pretty quickly. It is amazing. If you ever experience this you’ll never want to hand write tests again.
Property Testing: USE IT
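
For the curious, here is a minimal PropEr sketch (the json module and its encode/1 and decode/1 functions are hypothetical stand-ins, not ZJ’s actual API): you state a round-trip invariant once and PropEr generates the test cases for you.

-module(json_props).
-export([prop_roundtrip/0]).
-include_lib("proper/include/proper.hrl").

% Round-trip invariant over a simple subset of encodable terms.
% PropEr generates the inputs; we only declare the relationship.
prop_roundtrip() ->
    ?FORALL(List, list(integer()),
            json:decode(json:encode(List)) =:= {ok, List}).

Running proper:quickcheck(json_props:prop_roundtrip()) will hammer the property with generated inputs and shrink any failure down to a minimal counterexample.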

What about user testing? It is simply necessary. You’ll never dream up the insane stuff to try that users will, and neither will a property-based test generation system. Your test and development environment will often bear little resemblance to your users’ environments, the things you might think to store in your system will rarely look anything like the sort of stuff they will wind up storing in it, and the frequency of operation that you assumed might look realistic will almost never be anywhere close to the mark.
Your Users: COMMUNICATE WITH THEM

Ultimately, hand-written tests tend to tell a lot more about the author of the tests than the status of the software being tested.

Erlang: Eventually Things Will Change

Wednesday, May 30th, 2018

I finally got a few days to really dedicate to the whole Zomp/ZX thing and wrote some docs.

If you actually click this link soon you’ll see an incomplete pile of poo, but it is a firm enough batch of poo that I can show it now, and you can get a very basic idea what this system is supposed to do:

Zomp/ZX docs

Some pages are missing and things are still a bit self-conflicted. The problem is that until you really use a system like this a bit it is hard to know what the actual requirements need to be. So that’s been a long internal journey.

If my luck holds I’ll have something useful out in short order, though. Here’s to keeping fingers crossed and creating useful on-ramps for new programmers in desperate need of easy-to-use power tools. While we can all only hope the gods will help them when it comes to tackling their actual human-relevant problems, the environment in which they render their solutions should not be actively hostile.

The Steins

Tuesday, April 10th, 2018

And then there’s the family of Stein.
There’s Ger, there’s Epp, and there’s Ein.
Ger’s stories are bunk.
Epp’s programs are junk.
And no one can understand Ein.

Confounding Beginner Question: What is an Erlang Atom and Why is it Useful?

Thursday, February 1st, 2018

Like other Erlangers, I tend to take the atom data type for granted. Coming from another language, however, you might be puzzled at why we have all these little strings that aren’t really strings.

The common definition you’ll hear most frequently is something like:

An atom is a label. Its only meaning is itself.

Well, that’s true, but that also sounds a bit useless to someone coming from Python or R or JavaScript or whatever. So let’s break that down: what is a “label” useful for in programs?

  • Variable names are labels.
  • Function names are labels.
  • Module names are labels.
  • The strings you use as keys in a key/value data structure are labels.
  • The enums and label macros you might use in C for semantically significant internal values are almost exactly like atoms.

OK, so we use labels all the time. Why don’t any of those other languages have atoms, though? Let’s examine the last two items on that list for a hint why.

In Python, strings are objects, and while building them is expensive, hashing them can be done once and cached. This means comparing two strings of arbitrary length for equality is extremely cheap, because it reduces to a comparison of two large integers. This is not true in, say, C or Erlang or Lisp, unless you build your own data structure to carry around the pre-hashed data. In Python it is simple enough to say:

if 'foo' in some_dict:
  # stuff
else:
  # other stuff

In C, however, string comparison is a bit of a hassle, and dealing with string data in a cross-platform environment at all can be super annoying depending on the age of the systems you might be interacting with or running/building your code on. In Erlang the syntax of string comparison is super simple, but the overhead is not pre-paid like in Python. So what is the alternative?

We use integer values to represent keys that are semantically meaningful to the program at the time it is written. But integers are hard to remember, so instead of having magic numbers floating all around the place we typically have semantically significant integer values aliased from a text label as a macro. This is helpful so that I don’t have to remember the meaning of code like

if (condition == 42) launch_missiles();
if (condition == 86) eat_kittens();

Instead I can write code like:

#define UNDER_ATTACK    42
#define VILE_UNDERBEAST 86

if (condition == UNDER_ATTACK)    launch_missiles();
if (condition == VILE_UNDERBEAST) eat_kittens();

It is extremely common in programs to have variables or arguments like condition in the above example. It doesn’t matter whether your language has matching (like Erlang, Rust, logic languages, etc.) or uses explicit conditionals like the fake C example above — there will always be a huge number of micro datatypes that carry great semantic significance within your program and only within your program. It is as useful for the human coders to be able to label these enumerated values in a way they can understand and remember as it is useful for the computer to be able to compare them as simple integers instead of going to the trouble of string comparison every time your code needs to make a decision (string comparison entails an arbitrarily long sequence of integer comparisons every single time you compare two strings).

In C we use those macros like above (well, not always; C actually does have super convenient enums that work a lot like atoms, but didn’t when I started using it as a kid in the stone age). In Erlang we just use an atom right there in place. You don’t need a declaration or definition anywhere, the runtime just keeps track of these things for you.
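
To make that concrete, here is roughly what the fake C example above reduces to in Erlang (launch_missiles/0 and eat_kittens/0 are the same stand-in calls as in the C sketch):

case Condition of
    under_attack    -> launch_missiles();  % no #define needed anywhere
    vile_underbeast -> eat_kittens()
end.

The atoms under_attack and vile_underbeast exist the moment they are written, and the comparisons are integer comparisons under the hood.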

Underneath the hood Erlang maintains a running table of atom label values and translates them to integer values on the way into the system and back to labels on the way out. The integer each atom actually resolves to is totally unimportant to you, so Erlang abstracts that detail away, but leaves the machine comparing integer values instead of doing full-string comparisons all over the place.

“But Erlang maps don’t do string comparisons on keys!” you might say.

And indeed, you would be right. Because map keys might be any arbitrary value, each key is hashed on the way in, and every time keys are compared the comparing term is hashed the same way, so the end comparison is super fast, but we have to hash the input value first for it to mean anything. With atoms, though, we have a shortcut, because we already know they are unambiguous integer values throughout the system, and this is a slight win over having to hash before comparing keys.

In other situations where the comparison values cannot be hashed ahead of time, like function-head matching, however, atoms are a huge win over string comparisons:

-module(atoms).
-export([foo/1, bar/1]).

foo("Some string value that I don't really recall") ->
    {ok, 1};
foo("Some string value that I don't really care about") ->
    {ok, 2};
foo("Where is my cheeseburger?") ->
    {ok, 3};
foo(_) ->
    {error, wonky_input}.

bar(dont_recall) ->
    {ok, 1};
bar(dont_care) ->
    {ok, 2};
bar(cheeseburger) ->
    {ok, 3};
bar(_) ->
    {error, wonky_input}.

I’ve slowed the clockspeed of the system so that we can notice any difference here in microseconds.

1> timer:tc(fun() -> atoms:foo("Some string value that I don't really care about.") end).
{16,{error,wonky_input}}
2> timer:tc(fun() -> atoms:foo("Where is my cheeseburger?") end).
{13,{ok,3}}
3> timer:tc(fun() -> atoms:foo("arglebargle") end).
{12,{error,wonky_input}}
4> timer:tc(fun() -> atoms:bar(dont_care) end).
{9,{ok,2}}
5> timer:tc(fun() -> atoms:bar(cheeseburger) end).                                      
{10,{ok,3}}
6> timer:tc(fun() -> atoms:bar(arglebargle) end).                                        
{10,{error,wonky_input}}

See what happened? The long string that varies from two of the options in the function heads only at its tail end takes 16 microseconds to compare and return a value. The string that differs at the head is evaluated as a bad match for the first two options the moment the very first character is compared. The total mismatch is our fastest return, because that string never needs to be traversed even a single time to know that it doesn’t match any of the available definitions of foo/1. With atoms, however, we see a pretty constant speed of comparison. That speed would not change at all even if the atoms were a hundred characters long in text, because underneath they are all just integer values.

Now take a look back and consider the return values defined for foo/1 and bar/1. They don’t return naked values, they return pairs where the first member is an atom. This is a pretty common technique in Erlang when writing either a library intended for 3rd party use or when defining functions that have side-effecty operations that might fail (here we have pure functions, but I’m just using this as an example). Remember, the equal sign in Erlang is both an assignment operator and an assertion operator. When calling a function that nests its return values, you have the freedom to decide whether to crash the current process on an unexpected value or to handle the “error” (in which case, for your program, it becomes an expected condition and not an exception).

blah(Condition) ->
    {ok, Value} = foo(Condition),
    do_stuff(Value).

The code above will crash if the tuple {error, wonky_input} is returned, because the expected atom 'ok' does not match the actually returned atom 'error'.

bleh(Condition) ->
   case foo(Condition) of
       {ok, Value}          -> do_stuff(Value);
       {error, wonky_input} -> get_new_condition()
   end.

The code above now does not crash on that error return value and instead moves on to get another condition to try out, because the error tuple matches one of the case conditions that is defined as a return value. All this can happen really fast because atom comparisons are really integer comparisons, and that means we save a ton of processor time (and space) by avoiding string/list or binary comparisons all over the place.

In addition to atoms being a much nicer and dramatically more flexible version of global enumerated types that lets us write code in a more natural style using normal-language labels for program semantics, it turns out that function and module names are also atoms. This is a really nice feature in itself, because it allows us to write highly dynamic code with a lot less confusion about what types both sides of a call need to be, as well as making the code easier to read. I can even implement my own version of apply/3:

my_apply(Module, Function, Args) ->
    % Note: this passes Args as a single argument (arity 1);
    % the real apply/3 calls Function with length(Args) arguments.
    Module:Function(Args).
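
Assuming the function above is compiled into a hypothetical module called demo, it works like this in the shell:

1> demo:my_apply(lists, reverse, [3, 2, 1]).
[1,2,3]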

Of course, there is a whole pile of reasons why you will never want to actually write a function like this in a real program, but that’s the sort of power we have without doing any type casting magic, introspection, or on-the-fly modification of our program, references or memory space.

Once you get used to using atoms and matching you’ll really start to miss them in other languages and wonder how you ever got along without them. Now run off and start writing some code to practice thinking with atoms. They will become natural to you before the day is out.

Zomp/zx: Yet Another Repository System

Tuesday, December 12th, 2017

I’ve been working on a from-source repo system for Erlang on and off for the last few months, contributing time to it pretty much whenever real-life is not interfering. I’m getting close to making a release. Now that my main data bits are worked out, the rest isn’t all that hard. I need to figure out what I want to say in an announcement.

The problem is that I’m really horrible at announcements and this system does things in a pretty different way to other repository systems out there, so I’m not sure what things are going to be important about it to users (worth putting into an announcement) and what things are going to be important to only me because I’m the one who wrote it (and am therefore obsessed with its externally inconsequential internals). What is internally interesting about a project is almost never what is externally interesting about it. Marketing; QED. So I need to sort that out, and writing sometimes helps me sort that kind of thing out.

I’m making this deliberately half-baked, disorganized, over-long post public because Joe Armstrong gave me some food for thought the other day. I had written him my thoughts on a subject posted to a mailing list but sent the message in private. I made my message to him off-list for two reasons: first, I wasn’t comfortable with my way of expressing the idea just yet; and second, I am busy with real-life stuff and side projects, including the repo system, and don’t want to get sucked into online chatter that might amount to nothing more than bikeshedding. (I’m a world-class bikeshedder!) Joe wrote me back asking why I made the reply private, I told him my reasons, and he made me change my mind. He hopes that more people will publish their ideas all the time, good or bad, fully baked or still soggy — because the only way we can find other interesting ideas these days is by searching for them, usually in text, on the net somewhere. It isn’t like we can’t go back and revise, but whether or not we do go back and clean up our literary messes, the availability of core ideas and exposure of thought processes are more important than polish. He’s been on a big drive to make sure that he posts most of his thoughts to public mailing lists or blogs so that his ideas get at least indexed and archived. On reflection I agree with him.

So here I am, trying to publicly organize my thoughts on my repository system.

I should start with the goals of the system.

This system is intended to smooth over a few points of pain experienced when trying to get a new Erlang project off the ground, and in particular avert the path of pain peculiar to Erlang newcomers when they encounter the “how to set up a project” problem. Erlang’s tooling is great but a bit crufty (deeply featured, but confusing to interface with) and not at all what the kool kids expect these days. And anyway I’m really just trying to scratch my own itch here.

At the moment we have two de facto standards for publishing Erlang systems: erlang.mk and Rebar. I like both of these, especially erlang.mk, but they do one thing that annoys me and never seems to quite fit my need: they build Erlang releases.

Erlang releases are great. They cut all the cruft of a release out and pack everything needed to actually run a system into a single blob of digits that you can move, in a single shot, to a new target system — including the Erlang runtime itself. Awesome! Self-contained deployment and it never misses. This has been an Erlang feature since before people even realized that they needed repeatable deployment infrastructure outside of the classic “let’s build a monolithic, static binary executable” approach. (Erlang is perpetually ahead of its time, even by today’s standards. I look at the poor kids stubbing their toes with Docker and language du jour and just shake my head — though part of that is because many shops are using Docker to solve concurrency issues that they haven’t even become cognizant of, thinking that they are experiencing “scaling” problems but missing the point entirely.)

Erlang releases are awesome when the deployment target is an embedded system, but not so awesome if the target is a full-blown operating system, VM, container, or virtual environment fully stocked with gobs of memory and storage and flush with system utilities and resources. Erlang releases sort of kitchen-sink the deployment itself. What if you want to run several different Erlang programs, all delivered as releases, all depending on the same library? You’ve got tons of copies of that library. Which is OK, but still sort of weird, because you also have tons of copies of the runtime (among other things). Each release is self-contained and lean, but in aggregate this is a bit odd.

Erlang releases make sense when you’re deploying to a phone switch or a sensor device in the middle of nowhere and the runtime is basically acting as its own operating system. Erlang releases are, in that context, analogous to putting a Gentoo stage 3 binary image on a system to leapfrog most of the toolchain process. Very cool when you’re in that situation, but a bit tinker-tacky when you’re just trying to run, say, a client program written in Erlang or test a web front-end for something that uses YAWS or Cowboy.

So that’s the siloed-kitchen-sink issue. The other issue is that newcomers are perpetually confused about releases. This makes teaching elementary Erlang hard. In my view we should really focus on escript for beginner code — just let the new guy run something out of a single file the way he is used to doing when learning a new language instead of showing him pages of really slick code, then some interpreter stuff, and then leaping straight from that to a complex and advanced packaging setup necessarily tailored for conducting embedded deployments to slim hardware devices. Seriously. WTF. Escripts give beginners all the power of Erlang necessary for exploring the more interesting bits of code and refactoring needed to learn sequential Erlang with the major advantage of being able to interface with the system the same way programmers from other environments are used to dealing with language runtimes like Bash, AWK, Python, Ruby, Perl, etc.

But what about that gap between scripts and full-blown production deployments for embedded hardware?

Erlang has… nothing.

That’s right! There is no agreed-upon way to deploy or even run Erlang code in the same manner a Python coder would expect to execute a python program. There is no virtualenv type system, there is no standard answer to the question “if I’m in the project directory and type ./do_thingy it will just work, right?” The answer is always “Well, it depends…” and what actually winds up happening is that people either roll a whole release just to crank a trivial amount of code up or (quite often) implement an ad hoc way to get the same effect in a lighter-weight way. (erlang.mk shines here, actually.)

Erlang does provide a number of ways to make a system run locally from source or .beam files — and actually has quite reasonable built-in resources for this — but nothing has been built around these tools that also deals with external dependencies, argument passing in a standard way, or any of the other little things you really need if you want to claim a complete solution. Hence all the ad hoc solutions that “work on my machine” but certainly aren’t something you expect your users to use (not with broad success, anyway).

This wouldn’t be such a big problem if it weren’t for the fact that not having any standard way to “just run a program” also means that there really isn’t any standard way to deal with client side code in Erlang. This is a big annoyance for me because much of what I do is client-side code. In Erlang.

In fact, it totally boggles my mind that client-side Erlang isn’t more common, especially considering that AMD is already fielding zillion-core processors for desktops, yet most languages are fundamentally single-threaded. That doesn’t mean you can’t do concurrency and parallelism in other languages, but most problems are not parallel in nature to begin with (parallel problems are easy to write solutions to in any language) while most real-world problems are concurrent. But concurrent systems are hard to write in almost every language. Concurrent problems are the bulk of the interesting problems we’re still not very good at solving with computers. AMD is moving to make much more interesting concurrent processing tools available on the client side (which means Intel will soon start pouring its gajillions worth of blood diamond money into a similar effort), but most languages and environments have no good way to make use of that on the client side. (Do you see why I hear Lady Fortune knocking?)

Browsers? Oh yeah. That’s a great plan. Have you noticed that most sites slowly move toward the “Single Page App” design over time (read as: the web sucks, so now we write full-but-crippled client-programs and deliver them over the web), invest heavily in do-sneaky-things-without-telling-you JavaScript and try to hog every core your system has if you allow it the slightest permission to do so? No. In the age of bitcoin miners embedded in nearly every ad this is not the direction I think we should be envisioning things going.

I want to take better advantage of the cores users have available, and that doesn’t necessarily mean make more efficient use of every cycle as much as it means to make scheduling across processes more efficient to reduce latency throughout the system overall. That’s something users care about quite a lot. This is the problem Erlang has already solved in a way no other runtime out there has. So I want to capitalize on it.

And yet, there is still no standardish way of dealing with code from source, running it locally, declaring or resolving dependencies, or even launching a client-side program at all.

So… how am I approaching it?

I have a project called “zomp” which is a repository system. It is a distributed repository system, so not everything has to be held in one place. Code in the zomp universe is held in little semantic silos called “realms”. Each realm can have whatever packages the owner (sysop) wants it to have. Each realm must have one server node somewhere that is its “prime” — the node in charge of that realm. That node is where system operator tasks for that realm take place, where packagers and maintainers submit code for inclusion, where the package index is built, and where the canonical copy of everything is stored. Other nodes configured to see that realm connect to the prime node, receive a copy of the current indexes, and are tested for availability and published as available resources for querying indexes or downloading packages.

When too many subordinate nodes connect to a prime, the prime will redirect a new node to a subordinate; when a subordinate gets “full” of subordinates itself, it picks one of its own subordinates for new redirects, and so on. Each realm thus winds up forming a resource tree of mirror nodes that connect back to the realm prime by a single path. A single node might be prime for several realms, or other nodes may act as prime for different realms — and any node can be configured to become a part of any number of realm trees.

That’s the high-level code division.

The zomp constellation is interfaced with via the “zx” program (short for “zomp explorer”, or “zomp exchanger”, or “Zomp eXtreem!”, or homage to the Sinclair ZX-81, or whatever else might lend itself to the letters “zx” that you might want to make up — I actually forget what it originally stood for, but it is remarkably convenient to type so it’s staying that way).

zx is configured to have visibility on zomp realms the same way a zomp node is (in fact, they use the same configuration files and it isn’t weird to temporarily host a zomp node on your desktop the same way you might host a torrent node for a while — the only extra effort is that you do have to open a port, zomp doesn’t (yet) do hole punching magic).

You can tell zx to run a program using the highly counter-intuitive command:

zx run Realm-ProgramName[-Version]

It breaks the program name down into:

  • Realm (optional, defaulting to the main realm of public FOSS packages called “otpr”)
  • Name (necessary — sort of the whole point)
  • Version (which is optional and can also be partial: “1.0.3” vs just “1.0” or “1”, defaulting to the latest in a series or latest overall)
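
With an invented package name, that grammar works out to invocations like these (otpr is the default realm, as noted in the list above):

zx run otpr-coolthing          # no version: resolve the latest available
zx run coolthing               # same, with the realm defaulting to otpr
zx run otpr-coolthing-1.0      # partial version: latest patch in the 1.0 series
zx run otpr-coolthing-1.0.3    # fully qualified: exactly version 1.0.3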

With those components it then contacts any zomp node it knows provides the needed realm, resolves the latest version number of the requested program, downloads and unpacks it, checks and downloads any missing dependencies, builds the program, and launches it. (And if it doesn’t know any active mirrors it asks the prime node and is seeded with known mirror nodes in addition to getting its query answered.)

The packages are kept in a local cache stored at the user level, not the system level (sort of like how browsers keep their JS and page caches) — though if you want to daemonize zomp and run it as a permanent service (if you run a realm prime, for example) then you would want to create an unprivileged system user specifically for the purpose. If you specify a fully-qualified “realm-name-version” for execution and the packages already exist and are built, zx just launches the code directly (which is the majority case, so no delay there — fast startup).

All zomp nodes carry a complete index of their configured realms and can answer queries with very little overhead, but only the prime node has a copy of all the packages for that realm.


Zomp realms are write-only. There is no facility for removing a package from a realm entirely, only for upgrading the versions of packages whenever necessary. (Removal is, of course, possible, but requires manual intervention by the sysop.)

When a zx client or zomp node asks an upstream node for a package and the upstream node does not have a copy, it will query its own upstream until the request reaches a node that does have a copy. Once one is found, a “found” notice goes back down to the client telling it how many hops away the package is, and new “hops away” notices are sent as the package is passed downstream toward the original requestor (avoiding timeouts and allowing the user to get some feedback about what is going on). The package is cached at each node along the way, so subsequent requests for that same package will be handled immediately without any more relay downloading.

Because the tree of nodes is expected to be relatively ephemeral and in a constant state of flux, the tendency is for package stores on mirror nodes to be populated by only the latest, most popular packages. This prevents the annoying problem with old realms having gobs of packages that nobody uses but mirror hosts being burdened with maintaining them all anyway.

But why not just keep the latest of everything and ditch old packages?

Ever heard of “version shear”? Yeah. Me too. It sucks. That’s why.

There are no “up to” or “greater than” or “abstract version 3” type dependency declarations in zomp package metadata. As a package maintainer you must explicitly declare the complete version of each dependency in your system. In the case of diamond-shaped dependencies (where two packages in your system depend on slightly different versions of the same package) the burden is on the packagers to declare a version that works for a given release of that package. There are no dependency trees for this reason. If your package depends on X, and X depends on Y and Z then your package must be defined as depending on X, Y and Z — and fully specify the versions involved.

Semver is strictly enforced, by the way. That is, all release numbers are “Major.Minor.Patch”. And that’s it. No more, no less. This is one of the primary criteria for inclusion into a public realm and central to the way both zx and zomp interpret package semantics. If an upstream project has some other numbering scheme the packager will need to create a semver standard of his own. And actually, this turns out to not be very hard in practice. There is one weird side-effect of full, static dependency version declarations and semver: updating dependencies results in incrementing your package’s patch number, so even if you don’t change anything in a program for a long time, a program with many dependencies under heavy development may wind up on version 2.3.257 without much change other than the {deps, PackageIDs}. line in the package meta file.
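
For illustration only — the real zomp.meta schema isn’t shown in this post — the deps entry mentioned above might look something like this, using the fully qualified realm-name-version package IDs described earlier:

% Hypothetical sketch of a deps line in a zomp.meta file.
{deps, ["otpr-zj-1.0.1", "otpr-coolthing-2.3.257"]}.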

zx helps make you aware of these situations, so solving them has not been particularly difficult in practice.

Why do things this way?

The “static dependencies forever and ever, amen” decision is a tradeoff between the important feature of fully repeatable builds Erlang releases are famous for (to the point of bug-compatibility between deployment sites — which is critical in production) and the flexibility users and developers have come to expect from source repository systems like pip, PyPI, CPAN, etc. Because each realm is write-only there is no danger that a package will be superseded and disappear. The way trickle-down caching works for mirror zomp nodes does not unduly burden the subordinate realm mirrors, and the local caching behavior of zx itself at launch time tends to make all of this mostly delay-free for zx clients and still gives them the option to always run the “latest available version” if they want.

And on the note of “latest version”…

Client-side programs are not expected to be run too terribly long at a time. People shut desktop programs down, restart computers, update their kernels, etc. So even if a client program runs a long time (on the order of web, email, IRC, certain games, crypto wallets/miners, torrent nodes, Freenode, Tor, etc) it will still have a chance to restart every few days or weeks to check for a new version (if invoked in a way that omits the version number so that it always queries the latest version).

But what about long-running server-side type programs? When zx starts, a script checks the initial environment and then starts the Erlang runtime with zx as its target application, passing it the package ID of the desired program to run and its arguments as arguments. That last sentence was odd. An example is helpful:

zx run foo-bar arg1 arg2 arg3

zx invokes the launching script (a Bash script on Linux, BSD and OSX, a batch file on Windows — so actually the command is zx.bash or zx.cmd) with the arguments run foo-bar arg1 arg2 arg3. zx receives the instruction “run” and then breaks “foo-bar” into {Realm, Name} = {"foo", "bar"}. Everything after that is passed in as strings, which wind up being the input arguments to the program being run, “foo-bar”.

zx registers a process called zx_daemon which remains resident in the runtime and waits for a subscription request or zomp query. Any Erlang program written with the intention of being used with zx can send a message to zx_daemon and ask it to maintain a connection to the program’s parent realm and enroll for update notifications. If the target program itself is the subject of a realm index update then it will get a message letting it know what has changed. The program can respond any way the author wants to such a notification.

In this way it is possible to write a client-side or server-side application that can enroll to become aware of updates to itself without any extra infrastructure and a minimal amount of code. In some programs I’ve used this to cause a pop up notification to appear to desktop users so they know that a new version has become available and they should restart the program (the way Firefox does on Windows). It could also be used to initiate a restart on its own, or whatever else you might come up with.

There are several benefits to developers of using this system as well.

As a developer I can start a new project by doing zx init app [Realm-Name] or zx init lib [Realm-Name] in an existing project root directory and a zomp.meta file will be generated for it, or a new project template directory will be created (populated with a functioning sample skeleton project). I can do zx dialyze and zx will make sure a generally relevant PLT exists or is built (if not up to date) and use it to check the typespecs of the project and its dependencies. zx create package [Path] will create a zomp package, sign it, and populate the metadata for it. zomp keygen will generate the kind of keys necessary to interact with a zomp server. zomp submit PackageFilePath will submit a package for review.

And so on. It is a lot easier to do most things now, and that’s the main point.

(There are commands for reviewing, approving, or rejecting package submissions, adding packagers and maintainers to package projects, adding dependencies to projects, X.Y.Z version incrementing, etc. as well.)

This is about 90% of the way I want it to be, but that means about 90% of the effort remains (pessimistically assuming the 90/10 rule, because life sucks and nobody cares). Most of that is probably going to be finagling some network lunacy, but a lot of the effort is going to be in putting polish to it.

Zomp/zx is based on a similar project I wrote for use within Tsuriai a few years ago that has much sparser features but does basically the same thing: eases packaging and repeatable deployment from source to client systems. I would never release that version publicly because it has a lot of “works for me!” level functionality, but very little polish and requires manually diddling quite a few settings files in error-prone ways (which is fine because it was just us diddling them).

My intention here is to Cadillac this out a bit so that newcomers can slide into the new language and just focus on that language after learning a minimum of tooling commands or environmental details. I think zx init app foo-bar and zx runlocal are a low enough bar for entry.

Web Designers: Stop making SPAs for inherently web 1.0 style sites

Saturday, October 14th, 2017

It is 2017. What’s with the drive to make everything an SPA whether it needs to be or not? This is getting a little ridiculous. I’m going to ramble on below a bit because I’ve got a hankering to do so — pay this no mind.

All around the web I see sites that are best represented as a collection of inter-linked documents, and all around the web I see many of those being changed into single-page application (SPAs). Even more stupid is when the SPA in question was built by some naive dope who included a little bit of almost every JS framework in existence — including a random selection from the thousands of obsolete and dead ones.

What is the goal? What’s the deal? Do web authors today not know how the web was actually intended to work originally? That document publication is actually its reason for existence in the first place and that “web applications” are a new thing that is a backhack to an incomplete standard that only sorta-kinda-works?

Granted, the reason it only sorta-kinda-works is due mostly to the problems inherent in the fact that only a single language is allowed in scripts… which is ridiculous. Was nobody paying attention to the Guile2 approach all those years? The only lesson learned from the Java applet and Flash experience seems to have been that “it sucks to force users to install runtimes as plugins”. Ugh.

Anyway, back to web applications…

I get it. For the moment we don’t have a solid distinction between “a document browser” and “an application browser” so we are stuck with this insufficient worst-of-both-worlds nether region of “applications that masquerade as documents”. And that drives anyone nuts who has given this much thought.

Not that a lot of people have considered the difference deeply. I imagine that is probably because very few new coders today have ever written more than a line or two of code intended to run natively on a user’s local system. Nearly everyone has written thousands of lines of code intended to run natively on server-side systems, but even that is getting wonky because many youngsters today don’t know how to deploy without using Docker yet lack the faintest inkling as to what problems Docker actually is intended to solve and wind up bypassing better solutions when they exist.

Tools shine when they are used in a focused way, performing the job for which they were intended. The web is the same way. Yes, it is a big jumble of crap. So let’s just leave that there. Networks are a big jumble of crap, too, and so are human societies — so we’ve adopted dirty ways of dealing with the dirt. The jumbly pile of shit that is the web is one of our ways of dealing with that. Everything times out. Everything is sent in text. Protocols are bloated and redundant. There isn’t even a proper definition of what “valid” HTML and XML and JSON and whatever else is in most cases. It’s all racing toward a singularity where everything is uniformly stupid. But… whatever, it sort of kind of still works — and humans just barely work themselves, so that’s par for the course.

The original web was designed to function as an insecure document publication system where documents could be interlinked. We realized that we could include more interesting stuff by expanding the definition of “document” to include more than just text, and quite recently with HTML5 the way in which documents can be written is only a few orders of magnitude behind, say, LaTeX, in its ability to arrange things on the screen (that feature lag is not entirely the fault of the HTML5 authors).

This gives a lot of freedom to website authors — perhaps too much.

If a website is a set of news articles or academic papers (or even tweets) then you really don’t need a SPA, you need a more traditional sort of “web site”. It can be dressed up all pretty with shiny things sprinkled around, of course, but we don’t want a SPA that mysteriously changes state in ways that prevent users from bookmarking things or easily sending one another links to specific resources (something Twitter got right despite some initial confusion over how to frame their content), etc.

If a website is actually just a delivery front end for a graphical RPG, well, obviously the game part of the site is probably best designed as a SPA, but the rest of the site — the forums, armory, character pages, bestiary, fan wiki, manual, guild rankings, lore pages, etc. — is absolutely best presented outside of that SPA as an actual website.

See the difference?

The game example is actually quite useful to contemplate for a variety of reasons. I’ll probably come back and cut this post down to just that part. Either that or eventually come back and rewrite the first bits to more accurately convey the humor with which I, as a graybeard resident in cyberspace for about 30 years now, view the state of the web today.

Whatever you do, dear reader, have fun coding, and remember: Don’t outsmart yourself!

The most basic Erlang service ⇒ worker pattern

Friday, March 25th, 2016

There has been some talk about identifying “Erlang design patterns” or “functional design patterns”. The reason this sort of talk rarely gets very far is that, generally speaking, “design patterns” is a phrase that means “things you have to do all the time that your language provides no primitives to represent, and no easy way to write a library function behind which to hide an abstract implementation”. OOP itself, being an entire paradigm built around a special syntax for writing dispatching closures, tends to lack a lot of the primitives we want today, and so has a litany of design patterns.

NOTE: This is a discussion of a very basic Erlang implementation pattern, and being very basic it also points out a few places new Erlangers get hung up, like the context in which a specific call executes — because that’s just not obvious if you’re not already familiar with concurrency at the level Erlang does it. If you’re already a wizard, this article probably isn’t for you.

But what about Erlang? Why have so few design patterns (almost none?) emerged here?

The main reason is that what would have been design patterns in Erlang have mostly become either functional abstractions or OTP (here “OTP” refers to the framework shipped with Erlang). This is about as far as patterns have needed to go in the most general case. (Please note that it very often is possible to write a framework that implements a pattern, though it is very difficult to make such frameworks completely generic.)

But there is one thing the ole’ Outlaw Techno Psychobitch doesn’t do for us that quite a few of us have a common need for, and that we each have to discover for ourselves: how to create a very basic arrangement of service processes, supervisors, and workers that spawn workers according to some ongoing global state or node configuration. (Figuring this out is almost like a rite of passage for Erlangers — and often even experienced Erlangers have never distilled this down to a pattern, even if many projects do eventually evolve into something structured similarly.)

The case I will describe below involves two things:

  • There is some service you want to create that is represented by a named process that manages it and acts as its sole interface. Higher-level code in the system doesn’t want to call low-level code to get things done, the service should know how to manage itself.
  • There is some configurable state that is relevant to the service as a whole, should be remembered, and you should not be forced to pass in as arguments every time you call for this work to be done.

For example, let’s say we have an artificial world written in Erlang. Let’s say it’s a game world. Let’s say mob management is abstracted behind a single mob manager service interface. You want to spawn a bunch of monster mobs according to rules such as blahlblahblah… (Who cares? The game system should know the details, right?) So that’s our task: spawning mobs. We need to spawn a bunch of monster mob controller processes, and they (of course) need to be supervised, but we shouldn’t have to know all the details to be able to tell the system to create a mob.

The bestiary is really basic config data that shouldn’t have to be passed in every time you call for a new monster to be spawned. Maybe you want to back up further and not even specify the type of monster — perhaps the game system itself should know what the correct spawn/live percentages are for different types of mobs. Maybe it also knows the best way to deal with positioning to create a playable density, deal with position conflicts, zone conflicts, leveling or phasing influences, and other things. Like I said already: “Who cares?”

Wait, what am I really talking about here? I’m talking about sane defaults, really. Sane defaults that should rule the default case, and in Erlang that generally means some sane options that are comfortably curried away in the lowest-arity calls to whatever the service functions are.  But from whence come these sane defaults? The service state, of course.

So now that we have our scenario in mind, how does this sort of thing tend to work out? As three logical components:

  • The service interface and state keeper, let’s call it a “manager” (typically shortened to “man”)
  • The spawning supervisor (typically shortened to “sup”)
  • The spawned thingies (not shortened at all because it is what it is)

How does that typically look in Erlang? Like three modules in this imaginary-but-typical case:

  • game_mob_man.erl
  • game_mob_sup.erl
  • game_mob.erl

The game_mob_man module represents the Erlang version of a singleton, or at least something very similar in nature: a registered process. So we have a definite point of contact for all requests to create mobs: calling game_mob_man:spawn_mob/0,1,... which is defined as

spawn_mob() ->
    spawn_mob(sane_default()).

spawn_mob(Options) ->
    gen_server:cast(?MODULE, {beget_mob, Options}).


Internally there is the detail of the typical

handle_cast({beget_mob, Options}, State) ->
    ok = beget_mob(Options, State),
    {noreply, State};
%...

and of course, since you should never be putting a bunch of logic or side-effecty stuff directly in your handle_* function clauses, beget_mob/2 is where the work actually occurs. Of course, since we are talking about common patterns, I should point out that there are not always good linguistic parallels like “spawn” ⇒ “beget”, so a very common thing to see is some_verb/N become a message {some_verb, Data}, which becomes a call to an implementation do_some_verb(Data, State):

spawn_mob(Options) ->
    gen_server:cast(?MODULE, {spawn_mob, Options}).

%...

handle_cast({spawn_mob, Options}, State) ->
    ok = do_spawn_mob(Options, State),
    {noreply, State};

% ...

do_spawn_mob(Options, State = #s{stuff = Stuff}) ->
    % Actually do work in the `do_*` functions down here.
    % (Sketch: fold the relevant service state into the request
    % and hand it to the spawning supervisor.)
    {ok, _Pid} = game_mob_sup:spawn_mob({Options, Stuff}),
    ok.
The important thing to note above is that this is the kind of module that is registered under its own name, which is why the call to gen_server:cast/2 is using ?MODULE as the address (and not self(), because remember, interface functions are executed in the context of the caller, not the process defined by the module).

Also, are the some_verb/N ⇒ {some_verb, Data} ⇒ do_some_verb/N names sort of redundant? Yes, indeed they are. But they are totally unambiguous, inherently easy to grep -n, and most importantly, they give us breaks in the chain of function calls necessary to implement abstractions like the managed messaging and supervision that underlie OTP magic like the gen_server itself. So don’t begrudge the names, it’s just a convention. Learn the convention so that you write less annoyingly mysterious code; your future self will thank you.

So what does that have to do with spawning workers and all that? Inside do_spawn_mob/N we are going to call another registered process, game_mob_sup. Why not just call game_mob_sup directly? For two reasons:

  1. Defining spawn_mob/N within the supervisor still requires acquisition of world configuration and current game state, and supervisors do not hold that kind of state, so you don’t want data retrieval tasks or evaluation logic to be defined there. Any calls to a supervisor’s public functions are executed in the context of the caller, not the supervisor itself, anyway. Don’t forget this. Calling the manager first gives the manager a chance to wrap its call to the supervisor in state and pass the message along — quite natural.
  2. game_mob_sup is just a supervisor, it is not the mob service itself. It can’t be. OTP already dictates what it is, and its role is limited to being a supervisor (and in this particular case of dynamic workers, a simple_one_for_one supervisor at that).

So how does game_mob_sup look inside? Something very close to this:

-module(game_mob_sup).
-behavior(supervisor).

%%% Interface
spawn_mob(Conf) ->
    supervisor:start_child(?MODULE, [Conf]).

%%% Startup
start_link() ->
    supervisor:start_link({local, ?MODULE}, ?MODULE, []).

init([]) ->
    RestartStrategy = {simple_one_for_one, 5, 60},
    Mob = {game_mob,
           {game_mob, start_link, []},
           temporary,
           brutal_kill,
           worker,
           [game_mob]},
    Children = [Mob],
    {ok, {RestartStrategy, Children}}.

(Is it really necessary to define these things as variables in init/1? No. Is it really necessary to break the tuple assigned to Mob vertically into lines and align everything all pretty like that? No. Of course not. But it is pretty darn common and therefore very easy to catch all the pieces with your eyes when you first glance at the module. It’s about readability, not being uber l33t and reducing a line count nobody is even aware of that isn’t even relevant to the compiled code.)

See what’s going on in there? Almost nothing. That’s what. The interesting part to note is that very little config data is going into the supervisor at all, with the exception of how supervision is set to work. These are mobs: if they crash they shouldn’t come back to life, better to leave them dead and signal whatever keeps account of them so it can decide what to do (the game_mob_man, for example, which would probably be monitoring these). Setting them as permanent workers can easily (and hilariously) result in a phenomenon called “highly available mini bosses” — where a crash in the “at death cleanup” routine or the mistake of having the mob’s process retire with an exit status other than 'normal' causes it to just keep coming back to life right there, in its initial configuration (i.e. full health, full weapons, full mana, etc.).
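
For completeness, here is a hedged sketch of what the worker side might minimally look like — just enough to satisfy the child spec above. A real game_mob would obviously carry real state and handle real messages:

-module(game_mob).
-export([start_link/1]).
-export([init/2]).

% Called by game_mob_sup with the Conf passed to spawn_mob/1.
start_link(Conf) ->
    proc_lib:start_link(?MODULE, init, [self(), Conf]).

init(Parent, Conf) ->
    ok = proc_lib:init_ack(Parent, {ok, self()}),
    loop(Conf).

% The mob's whole life: wait for messages, die when told to.
loop(Conf) ->
    receive
        die         -> exit(normal);
        _Unexpected -> loop(Conf)
    end.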

But what stands above this? Who supervises the supervisor?

Generally speaking, a component like mob monsters would be a part of a larger concept of world objects, so whatever the world object “service” concept is would sit above mobs, and mobs would be one component of world entities in general.

To sum up, here is a craptastic diagram:

Yes, my games involve wildlife and blonde nurses.

The diagram above shows solid lines for spawn_link, and dashed lines to indicate the direction of requests for things like spawn_link. The diagram does not show anything else. So monitors, messages, etc. are all just not there. Imagine them. Or don’t. That’s not the point of this post.

“But wait, I see what you did there… you made a bigger diagram and cut a bunch of stuff out!”

Yep. I did that. I made an even huger, much crappier, more inaccurate diagram because I wasn’t sure at first where I wanted to fit this into my imaginary game system.

And then I got carried away and diagrammed a lot more of the supervision tree.

And then I thought “Meh, screw it, I’ll just push this up to a rough imagining of what it might look like pushed all the way back to the SuperSup”.

Here is the result of that digression:

It wouldn’t look exactly like this, so use your imagination.

ALL. THAT. SUPERVISION.

Yep. All that. Right there. That’s why it’s called a “supervision tree” instead of a “supervision list”. Anywhere in there that you don’t have a dependency between parts, a thing can crash all by itself without bringing down the system. Consider this: the entire game can fail and chat will still work, users will still be logged in, etc. Not nearly as big a deal to restart just that one part. But what about ItemReg? Well, if that fails, we should probably squash the entire item system (I’ve got guns, but no bullets! or whatever) because game items are critical data. Are they really critical data? No. But they become critical because gamers are much more willing to accept a server interruption than they are losing items and having bad item data stored.

And with that, I’m out! Hopefully I was able to express a tiny little bit about one way supervision can be coupled with workers in the context of an ongoing, configured service that lives within a larger Erlang system and requires on-the-fly spawning of supervised workers.

(Before any of you smarties who have been around a while point out how I glossed over a few things, or how spawning a million items as processes might not be the best idea… I know. That’s not the point of this post, and the “right approach” is entirely context dependent anyway. But constructive criticism is, as always, most welcome.)