Author Archives: zxq9

Erlang: Silly way to see if your shell supports VT100 commands

There are a few cases where it can be useful to use VT100 terminal commands in shell interaction scripts to draw frames, progress bars, and menu lines, position the cursor, clear the screen, colorize text, etc.
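For example, colorizing text or clearing the screen is just a matter of emitting the right escape sequences with io:format/2. Here is a minimal sketch using the standard ANSI/VT100 codes (nothing from my library, just the raw escapes), pasteable into the shell:

%% Clear the screen, home the cursor, then print a line in bold green.
io:format("\e[2J\e[H\e[1;32m~ts\e[0m~n", ["Hello in green!"]).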

I actually have a small library of utilities like this that I might eventually release, but it's a pretty niche need.

Anyway, within that niche need, here is a really silly way to see if the terminal you run your shell in supports VT100 commands. (If you’re on Linux using pretty much any prepackaged terminal then your terminal supports VT100 commands, but that is not always so true on Windows, depending on how you are accessing your shell.) Paste the following into your shell:

Z =
  fun() ->
    % Connect to a telnet service that streams an animation drawn with
    % VT100 commands, and dump everything it sends straight to the terminal.
    {ok, S} = gen_tcp:connect("towel.blinkenlights.nl", 23, []),
    ok = gen_tcp:send(S, "\r\n"),
    Q =
      fun R() ->
        receive
          {tcp, S, B} ->
            % If these frames render cleanly, your terminal handles VT100.
            ok = io:format("~ts", [B]),
            R();
          {tcp_closed, S} ->
            done
        after 60000 ->
            ok = gen_tcp:close(S),
            timeout
        end
      end,
    Q()
  end.

And then do Z().

(I remember seeing this first years ago and had forgotten it was even a thing! The sysop excuses service is still live on port 666 as well, btw…)

Erlang: Converting text strings to Erlang terms

We all love file:consult/1 and are familiar now with its inverse function. And of course everyone knows how comfortable it is to use the BIFs term_to_binary/1 and binary_to_term/1,2 to communicate over the network between nodes and even among other networked thingies written in other programming languages using BERT-RPC.

But we still have a gap.

There is no widely known way to convert a text string that represents Erlang terms directly into a list of actual Erlang terms without writing to a file first and then calling file:consult/1. Most of the time you will never have this problem. But when you do encounter it, it can be mighty annoying to figure out the steps to convert the string or binary to internal Erlang terms (to the point that I sometimes see people actually write to a temporary file just so they can call file:consult/1 and then delete the file).

So, let’s take a look:

scan_binary(Bin) ->
    TermString = binary_to_list(Bin),
    scan_string(TermString).

scan_string(TermString) ->
    {_, Strings} = lists:foldl(fun break_terms/2, {"", []}, TermString),
    Tokens = [T || {ok, T, _} <- lists:map(fun erl_scan:string/1, Strings)],
    AbsForms = [A || {ok, A} <- lists:map(fun erl_parse:parse_exprs/1, Tokens)],
    [V || {value, V, _} <- lists:map(fun eval_terms/1, AbsForms)].

break_terms($., {String, Lines}) ->
    Line = lists:reverse([$. | String]),
    {"", [Line | Lines]};
break_terms(Char, {String, Lines}) ->
    {[Char | String], Lines}.

eval_terms(Abstract) ->
    erl_eval:exprs(Abstract, erl_eval:new_bindings()).

You’ll notice that I did not simply use string:lexemes(TermString, [$.]) (the successor to the now obsolete string:tokens/2) to break the original into discrete strings. That is because each string requires a period at the end or else erl_scan:string/1 will reject it. It is dramatically more efficient to run through the string a single time, breaking at the periods and adding them back, than to traverse it once to break it into segments and then traverse every resulting string again just to append a period at the end (which also means an extra traversal of the resulting list of strings to make the adjustment!).
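Just for illustration, the rejected lexemes-based version would have to look something like this (a sketch only, not part of the code above):

%% Split on periods, then traverse every fragment a second time just to
%% glue the terminating period back on before scanning.
scan_string_slow(TermString) ->
    Fragments = string:lexemes(TermString, [$.]),
    Strings   = [Fragment ++ "." || Fragment <- Fragments],
    Tokens    = [T || {ok, T, _} <- lists:map(fun erl_scan:string/1, Strings)],
    AbsForms  = [A || {ok, A} <- lists:map(fun erl_parse:parse_exprs/1, Tokens)],
    [V || {value, V, _} <- lists:map(fun eval_terms/1, AbsForms)].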

Everything that happens in scan_string/1 can, of course, crash if there is anything wrong in the input. If used as-is it should probably be run inside of a try..catch (and you should almost never, ever be using try..catch in Erlang to begin with, but this is one of the very few cases where it is probably a good idea). That could be accomplished by wrapping it in a non-insane function such as:

-spec maybe_scan(String) -> Outcome
    when String  :: string(),
         Outcome :: {ok, [term()]}
                  | {error, Reason :: term()}.

maybe_scan(String) ->
    try
        Terms = scan_string(String),
        {ok, Terms}
    catch
        error:Reason -> {error, Reason}
    end.
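In the shell, usage looks something like this (assuming the functions above are compiled into a module I will call terms purely for the sake of the example):

1> terms:maybe_scan("{port, 8080}.").
{ok,[{port,8080}]}
2> terms:maybe_scan("[a, b, {c, 3}].").
{ok,[[a,b,{c,3}]]}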

You’ll notice that I have a specific scan_binary/1 and a scan_string/1 also. I haven’t played around with this enough yet to feel comfortable throwing a full-blown iolist() at it, so my assumption is that you’re either reading data in from a file and will have a binary to start with, or have a string that arrives or is constructed somewhere internally and know that you should flatten it yourself before calling scan_string/1 or maybe_scan/1.

How did I arrive at this?

The larger problem I have had to solve just now is unpacking and reading in configuration data from a large number of tar archives that I receive over the wire. While I could unpack them to disk and then read the file I want with file:consult/1, it is dramatically faster to unpack only the file I want from the archive in memory (the archive itself has never been written to disk anyway), and that leaves me with a binary string of the file contents, but nothing on which I can call file:consult/1. D'oh!
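For reference, the in-memory unpacking itself is just erl_tar with the memory option, roughly like this sketch (the archive binary and the path inside it are made-up examples):

config_terms_from_tar(TarBin) ->
    % `memory` makes erl_tar return {NameInArchive, Binary} pairs instead of
    % writing files to disk; `compressed` copes with gzipped archives.
    {ok, [{_Name, ConfBin}]} =
        erl_tar:extract({binary, TarBin},
                        [memory, compressed, {files, ["etc/config.eterm"]}]),
    scan_binary(ConfBin).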

My solution to that problem was the above. This function has done its work now and I don’t need it anymore, but it strikes me as not such a crazy situation for other programmers to run into at some point so I’m leaving this here for my future self. I’ll probably include this function in a future version of a convenience library, and at that point I will either refactor it to break down all the possible error returns in a proper way (crash reports from within list operations inside list comprehensions can be mysterious), or decide that the details of an error from, say, erl_scan would be more confusing than they are worth and instead provide a more generic return from some interface like maybe_scan/1.

Trump on the DPRK: Exerting Maximal Regime Change Influence

Sitting within the target zone for a North Korean retaliation causes one to contemplate a bit on the state of things. Trump has doubled down on his bellicose rhetoric of “fire and fury” over the course of the last day, and quite a few people are flipping out, as anyone could have predicted. I have received several emails and calls from friends wishing me well if things go south, expressing hopes that various cabinet personalities can reel Trump in and so on.

All of this assumes Trump is nuts. That is far from an accurate portrayal of the situation.

Washington is faced with a very tough choice right now, but one that has only one real option available: Does Washington wait until American cities sit under nuclear threat from a country with a decision making apparatus that is only a single person deep (meaning, ultimately, the strike decision is left up to personal whim and intent), or does it sacrifice non-Americans to protect Americans?

Obviously, the choice is clear: risk non-Americans instead of risking Americans. To think that any other nation would do any differently is to believe we exist in a parallel universe where altruism reigns, feelings are reasonable goals of achievement and love conquers all. We do not live in that universe.

Let’s be clear: the US will not allow tens of millions of Americans to sit at risk of a North Korean leader who wishes to advance an extortion game against Washington. It will avert that by risking tens of thousands of foreign lives (mostly South Koreans, but also some Japanese and possibly Chinese as well). Even though I live within the zone that might get splatted, I really can’t see any other way for things to be — and let’s remember: this is tens of millions of American lives VS a few tens of thousands of foreigners from Washington’s perspective. Not much of a choice there, even if one is a hardcore humanitarian.

So now that we have established the American calculus, and we’re not deluding ourselves into thinking that management of a nuclear-armed, globally-strike capable North Korea is part of our menu of options, what is Trump going to do about this? How about get the Chinese or Russians to do something instead? Well, that route has already been explored and exhausted. The Chinese enjoy North Korea being a useful problem regionally, so do the South Koreans to some degree, the Russians love having the DPRK act as a consistent policy spoiler for everyone involved, and even the Japanese have leveraged the existence of North Korea from time to time. It was a useful problem for pretty much everyone for quite a long time, and that’s why it has been allowed to fester for so long.

But now things have gotten serious.

The US cannot wait longer than next spring to strike. The decision on exactly when to strike is dependent on weather, mostly. If the Americans believe that the advantage leans to their side in cold weather then we will see a strike sometime between late November and early March. If the advantage would go to the Americans in warmer months then we will see a strike sometime between now and December. Expect the US to ramp up a strike capability from now until whenever and just sit on it to mask the moment of their intent. Sure, nonessential personnel being relocated from the American garrisons in South Korea would be a telltale sign, but I don’t know if Washington would even telegraph its intent that way rather than letting the chips fall where they may. This is serious business, after all. On the other hand, Washington may evacuate nonessential personnel right away and just remove that as an indicator altogether very soon. Who knows.

Back to the rhetorical bit Trump threw out the other day and then doubled down on today…

Trump is doing everything but being explicit about his threat to either glass North Korea entirely or commit to a massive conventional strike that comes very close to that. Looking at Trump’s negotiating style since the 1980’s it is very likely that he intends to do exactly that if the situation does not improve — he is not known for bluffing. He also would not have made this decision alone. China has already stated that they would defend North Korea in the event of an American strike, so by elevating it to the level of an absolute conflict Trump is essentially guaranteeing that there would not be any chance for any action to escalate to becoming a regional war because there would not be a North Korea left to defend.

That sounds crazy, but it is not. It ensures a limited scope to the conflict from the start, and that is wise.

From the North Korean perspective, though, it does one more thing: it places every single leader and peasant and their families under threat of annihilation if Pyongyang does not change course in some way. The Chinese have been trying to effect a regime change in Pyongyang unsuccessfully for a few years now. Beijing can’t do it, and it is very likely that nobody outside of North Korea can, short of a war. Trump’s appeal to an absolute level of violence here is an overt signal to the North Koreans that it is up to them to effect regime change or face total annihilation. There is plenty of hidden opposition to Kim Jong Un in Pyongyang — but unless they feel that Trump is more dangerous to them than their own leader they are unlikely to feel motivated to move. After all, North Korea has had spats with the West hundreds of times over the last several decades — so often that there is almost a script for this sort of thing.

Trump is going off script. He is doing so to evoke a specific survival reaction in the upper leadership in Pyongyang, specifically a reaction against Kim Jong Un. This is probably the best chance anyone has of deposing him: turning his own leadership against him. They might die if they go against Kim Jong Un. They will certainly die if they go against Trump. This is how mutinies are made from the outside. On the outside chance that it comes to an American strike Trump has already guaranteed that a Chinese retaliation would be pointless. A massive strike (nuclear or conventional) would be a huge shock to the world, but the populations of the world are already experiencing hyperbolic rhetorical shock — when the volume has been turned up to 11 for so long there isn’t really anywhere left to go.

Trump is not crazy and his staff have certainly planned out (and are constantly revising) attack plans on North Korea designed to execute a strike devastating enough to limit the scope of any follow-on actions from anyone in the region. He has since moved on to working an influence play directly aimed at the North Korean leadership. This is how the game is played. People today are not used to being forced into situations where one bad option is balanced by an even worse one. Sometimes there is no unicorn to come save the day. The world is only going to turn more harsh in the coming decade — we probably will only remember this as a side show (if we even care to remember it at all).

Erlangers! USE LABELS! (aka “Stop Writing Punched-in-the-Face Code Blocks”)

Do you write lambdas directly inline in the argument list of various list functions or list comprehensions? Do you ever do it even though the fun itself, or the other arguments or return assignment/assertion for the call are too long and force you to scrunch that lambda’s definition up into an inline-multiline ball of wild shit? YOU DO? WTF?!?!? AHHHH!

First off, realize this makes you look like a douchebag for not being polite to other people or your future self whenever you do it. There is a big difference for the human reading between:

%%% From shitty_inline.erl

do_whatever(Keys, SomeParameter) ->
    lists:foreach(fun(K) -> case external_lookup(K) of
                  {ok, V} -> do_side_effecty_thing(V, SomeParameter);
                  {error, R} -> report_some_failure(R)
                end
          end, Keys
    ).

and

%%% From shitty_listcomp.erl

do_whatever(Keys, SomeParameter) ->
    [fun(K) -> case external_lookup(K) of
        {ok, V} -> do_side_effecty_thing(V, SomeParameter);
        {error, R} -> report_some_failure(R) end end(Key) || Key <- Keys],
    ok.

and

%%% From less_shitty_listcomp.erl

do_whatever(Keys, SomeParameter) ->
    ExecIfFound = fun(K) -> case external_lookup(K) of
            {ok, V} -> do_side_effecty_thing(V, SomeParameter);
            {error, R} -> report_some_failure(R)
        end
    end,
    [ExecIfFound(Key) || Key <- Keys],
    ok.

and

%%% From labeled_lambda.erl

do_whatever(Keys, SomeParameter) ->
    ExecIfFound =
        fun(Key) ->
            case external_lookup(Key) of
                {ok, Value}     -> do_side_effecty_thing(Value, SomeParameter);
                {error, Reason} -> report_some_failure(Reason)
            end
        end,
    lists:foreach(ExecIfFound, Keys).

and

%%% From isolated_functions.erl

-spec do_whatever(Keys, SomeParameter) -> ok
    when Keys          :: [some_kind_of_key()],
         SomeParameter :: term().

do_whatever(Keys, SomeParameter) ->
    ExecIfFound = fun(Key) -> maybe_do_stuff(Key, SomeParameter) end,
    lists:foreach(ExecIfFound, Keys).

maybe_do_stuff(Key, Param) ->
    case external_lookup(Key) of
        {ok, Value}     -> do_side_effecty_thing(Value, Param);
        {error, Reason} -> report_some_failure(Reason)
    end.

Which versions force your eyes to do less jumping around? How about which version lets you most naturally understand each component of the code independently? Which is more universal? What does code like this translate to after erlc has a go at it?

Are any of these difficult to read? No, of course not. Every version of this is pretty darn basic and common — you need a listy operation but require a closure over some in-scope state to make it work right, so you really do need a lambda instead of being able to look all suave with a fun some_function/1 type thing. So we agree, taken by itself, any version of this is easy to comprehend. But when you are reading through hundreds of these sorts of things at once to understand wtf is going on in a project while also remembering a bunch of other shit code that is laying around and has side effects while trying to recall some detail of a standard while the phone is ringing… things change.

Do I really care which way you do it? In a toy case like this, no. In actual code I have to care about forever and ever — absolutely, yes I do. The fifth version is my definite preference, but the fourth will do just fine also.

(Or even the third, maybe. I tend to disagree with the semantic confusion of using a list comprehension to effect a loop over a list of values only for the side effects without returning a value – partly because this is semantically ambiguous, and also because whenever possible I like every expression of my code to either be an assignment or an assertion (so every line should normally have a = on it). In other words, use lists:foreach/2 in these cases, not a list comp. I especially disagree with using a listcomp when the main utility of using a list comprehension is normally to achieve a closure over local state, but here we are just calling another closure — so semantic fail there, twice.)

But what about my lolspeed?!?

I don’t know, but let’s see. I’ve created five modules, based on the above examples:

  1. shitty_inline.erl
  2. shitty_listcomp.erl
  3. less_shitty_listcomp.erl
  4. labeled_lambda.erl
  5. isolated_functions.erl

These all call the same helpers that do basically nothing important other than having actual side effects when called (they call io:format/2). What we are interested in here is the generated assembler. What is the cost of introducing these labels that help the humans out VS leaving things all messy the way we imagine might be faster for the runtime?

It turns out that just like with using assignments to document your code, there is zero cost to labeling your lambdas. For example, here is the assembler for shitty_inline.erl side-by-side with labeled_lambda.erl:

Oooh, look. The exact same stuff!

(This is a screenshot; a text file with the contents shown is here: label_example_comparison.txt)

See? All that annoying-to-read inline lambdaness buys you absolutely nothing. You’re not helping the compiler, you’re not helping the runtime, and you are hurting your future self and anyone you want to work with on the same code later. (Note: You can generate precompiler output with erlc -P and erlc -E, and assembler output with erlc -S. Here is the manpage. Play around with it a bit, BEAM and EVM are amazing platforms, wide open for exploration!)

So use labels.

As for execution speed… all of these perform basically the same, except for the last one, isolated_functions.erl. Here is the assembler for that one: isolated_functions.S. This outperforms the others, though to a relatively insignificant degree. Of course, it is only an “insignificant degree” until that part of the program is the most critical part of whatever your program does — then even a 10% difference may be a really huge win for you. In those cases it is worth it to refactor to test the speed of different representations against each version of the runtime you happen to be using — and all thoughts on mere style have to take a backseat. But this is never the case for the vast majority of our code.
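If you want numbers for your own case, timer:tc/3 is enough for a rough comparison. A sketch using the module names from the list above (the helpers in those modules do I/O, so this only illustrates the mechanics, and the param argument is a placeholder):

1> Keys = lists:seq(1, 10000).
2> {MicrosInline, ok} = timer:tc(shitty_inline, do_whatever, [Keys, param]).
3> {MicrosLabeled, ok} = timer:tc(labeled_lambda, do_whatever, [Keys, param]).
4> MicrosInline - MicrosLabeled.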

(I’ve read reports in the past that indicate 99% of our performance bottlenecks tend to reside in less than 1% of our code by line count — but I can’t recall the names of any just now. If you happen to find a reference, let me know so I can update this little parenthetical blurb with some hard references.)

My point here is that breaking every lambda out into a separate named function isn’t always worth it — sometimes an in-place lambda really is more idiomatic and easier to understand simply because you can see everything right there in the same function body. What you don’t want to see is multi-line lambdas squashed into argument lists that make things hard to read and give you the exact same result once compiled as labeling that lambda with a meaningful variable name on another line in the code and then referring to it where it is invoked later.

The most basic Erlang service ⇒ worker pattern

There has been some talk about identifying “Erlang design patterns” or “functional design patterns”. The reason this sort of talk rarely gets very far (just refer to any of the thousands of aborted ML and forums threads on the subject) is because generally speaking “design patterns” is a phrase that means “things you have to do all the time that your language provides both no primitives to represent, and no easy way to write a library function behind which to hide an abstract implementation”. OOP itself, being an entire paradigm built around a special syntax for writing dispatching closures, tends to lack a lot of primitives we want to represent today and has a litany of design patterns.

NOTE: This is a discussion of a very basic Erlang implementation pattern, and being very basic it also points out a few places new Erlangers get hung up on, like what context a specific call is made in — because that’s just not obvious if you’re not already familiar with concurrency at the level Erlang does it. If you’re already a wizard, this article probably isn’t for you.

But what about Erlang? Why have so few design patterns (almost none?) emerged here?

The main reason is that what would have been design patterns in Erlang have mostly become either functional abstractions or OTP (“OTP” in this use generally referring to the framework that is shipped with Erlang). This is about as far as the need for patterns has gone in the most general case. (Please note that it very often is possible to write a framework that implements a pattern, though it is very difficult to make such frameworks completely generic.)

But there is one thing the ole’ Outlaw Techno Psychobitch doesn’t do for us, even though quite a few of us have a common need for it, so we have to discover it for ourselves: how to create a very basic arrangement of service process, supervisor, and workers that spawns workers according to some ongoing global state or node configuration. (Figuring this out is almost like a rite of passage for Erlangers.)

The case I will describe below involves two things:

  • There is some service you want to create that is represented by a named process that manages it and acts as its sole interface.
  • There is some configurable state that is relevant to the service as a whole, should be remembered, and should not have to be passed in as arguments every time you call for this work to be done.

For example, let’s say we have an artificial world written in Erlang. Let’s say it’s a game world. Let’s say mob management is abstracted behind a single mob manager service interface. You want to spawn a bunch of monster mobs according to rules such as blahblahblah… (Who cares? The game system should know the details, right?) So that’s our task: spawning mobs. We need to spawn a bunch of monster mob controller processes, and they (of course) need to be supervised, but we shouldn’t have to know all the details to be able to tell the system to create a mob.

The bestiary is really basic config data that shouldn’t have to be passed in every time you call for a new monster to be spawned. Maybe you want to back up further and not even have to specify the type of monster — perhaps the game system itself should know generally what the correct spawn/live percentages are for different types of mobs. Maybe it also knows the best way to deal with positioning to create a playable density, deal with position conflicts, zone conflicts, leveling or phasing influences, and other things. Like I said already: “Who cares?”

Wait, what am I really talking about here? I’m talking about sane defaults, really. Sane defaults that should rule the default case, and in Erlang that generally means some sane options that are comfortably curried away in the lowest-arity calls to whatever the service functions are.  But from whence come these sane defaults? The service state, of course.

So now that we have our scenario in mind, how does this sort of thing tend to work out? As three logical components:

  • The service interface and state keeper, let’s call it a “manager” (typically shortened to “man”)
  • The spawning supervisor (typically shortened to “sup”)
  • The spawned thingies (not shortened at all because it is what it is)

How does that typically look in Erlang? Like three modules in this imaginary-but-typical case:

  • game_mob_man.erl
  • game_mob_sup.erl
  • game_mob.erl

The game_mob_man module represents the Erlang version of a singleton, or at least something very similar in nature: a registered process. So we have a definite point of contact for all requests to create mobs: calling game_mob_man:spawn_mob/0,1,... which is defined as

spawn_mob() ->
    spawn_mob(sane_default()).

spawn_mob(Options) ->
    gen_server:cast(?MODULE, {beget_mob, Options}).

Internally there is the detail of the typical

handle_cast({beget_mob, Options}, State) ->
    ok = beget_mob(Options, State),
    {noreply, State};
%...

and of course, since you should never be putting a bunch of logic or side-effecty stuff directly in your handle_* function clauses, beget_mob/2 is where the work actually occurs. Of course, since we are talking about common patterns, I should point out that there are not always good linguistic parallels like “spawn” ⇒ “beget”, so a very common thing to see is some_verb/N becoming a message {verb_name, Data} becoming a call to an implementation do_some_verb(Data, State):

spawn_mob(Options) ->
    gen_server:cast(?MODULE, {spawn_mob, Options}).

%...

handle_cast({spawn_mob, Options}, State) ->
    ok = do_spawn_mob(Options, State),
    {noreply, State};

% ...

do_spawn_mob(Options, State = #s{stuff = Stuff}) ->
    % Actually do work in the `do_*` functions down here

The important thing to note above is that this module is registered under its own name, which is why the call to gen_server:cast/2 is using ?MODULE as the address (and not self(), because remember, interface functions are executed in the context of the caller, not the process defined by the module).

Also, are the some_verb/N ⇒ {some_verb, Data} ⇒ do_some_verb/N names sort of redundant? Yes, indeed they are. But they are totally unambiguous, inherently easy to grep -n, and most importantly, give us breaks in the chain of function calls necessary to implement abstractions like managed messaging and supervision that underlie OTP magic like the gen_server itself. So don’t begrudge the names, it’s just a convention. Learn the convention so that you write less annoyingly mysterious code; your future self will thank you.

So what does that have to do with spawning workers and all that? Inside do_spawn_mob/N we are going to call another registered process, game_mob_sup. Why not just call game_mob_sup directly? For two reasons:

  1. Defining spawn_mob/N within the supervisor still requires acquisition of world configuration and current game state, and supervisors do not hold that kind of state, so you don’t want data retrieval tasks or evaluation logic to be defined there. Any calls to a supervisor’s public functions are being called in the context of the caller, not the supervisor itself anyway. Don’t forget this. Calling the manager first gives the manager a chance to wrap its call to the supervisor in state and pass the message along — quite natural.
  2. game_mob_sup is just a supervisor, it is not the mob service itself. It can’t be. OTP already dictates what it is, and its role is limited to being a supervisor (and in this particular case of dynamic workers, a simple_one_for_one supervisor at that).
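Concretely, the body elided from do_spawn_mob/2 earlier boils down to something like this (a sketch; mob_conf/3 and the #s{} field names are made up for illustration):

do_spawn_mob(Options, #s{world = World, bestiary = Bestiary}) ->
    % Fold the service state into the caller's options, then hand the
    % finished config off to the spawning supervisor.
    Conf = mob_conf(Options, World, Bestiary),
    {ok, _Pid} = game_mob_sup:spawn_mob(Conf),
    ok.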

So how does game_mob_sup look inside? Something very close to this:

-module(game_mob_sup).
-behavior(supervisor).

-export([spawn_mob/1]).
-export([start_link/0]).
-export([init/1]).

%%% Interface
spawn_mob(Conf) ->
    supervisor:start_child(?MODULE, [Conf]).

%%% Startup
start_link() ->
    supervisor:start_link({local, ?MODULE}, ?MODULE, []).

init([]) ->
    RestartStrategy = {simple_one_for_one, 5, 60},
    Mob = {game_mob,
           {game_mob, start_link, []},
           temporary,
           brutal_kill,
           worker,
           [game_mob]},
    Children = [Mob],
    {ok, {RestartStrategy, Children}}.

(Is it really necessary to define these things as variables in init/1? No. Is it really necessary to break the tuple assigned to Mob vertically into lines and align everything all pretty like that? No. Of course not. But it is pretty darn common and therefore very easy to catch all the pieces with your eyes when you first glance at the module. It’s about readability, not being uber l33t and reducing a line count nobody is even aware of that isn’t even relevant to the compiled code.)

See what’s going on in there? Almost nothing. That’s what. The interesting part to note is that very little config data is going into the supervisor at all, with the exception of how supervision is set to work. These are mobs: if they crash they shouldn’t come back to life, better to leave them dead and signal whatever keeps account of them so it can decide what to do (the game_mob_man, for example, which would probably be monitoring these). Setting them as permanent workers can easily (and hilariously) result in a phenomenon called “highly available mini bosses” — where a crash in the “at death cleanup” routine or the mistake of having the mob’s process retire with an exit status other than 'normal' causes it to just keep coming back to life right there, in its initial configuration (i.e. full health, full weapons, full mana, etc.).
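For completeness, the worker side of that child spec is just an ordinary gen_server started through start_link/1 (a minimal sketch; a real game_mob would obviously carry actual mob state and logic):

-module(game_mob).
-behavior(gen_server).

-export([start_link/1]).
-export([init/1, handle_call/3, handle_cast/2, handle_info/2,
         terminate/2, code_change/3]).

%% Called by game_mob_sup:spawn_mob/1 via supervisor:start_child/2.
start_link(Conf) ->
    gen_server:start_link(?MODULE, Conf, []).

init(Conf) ->
    {ok, Conf}.

handle_call(_Msg, _From, State)     -> {reply, ok, State}.
handle_cast(_Msg, State)            -> {noreply, State}.
handle_info(_Msg, State)            -> {noreply, State}.
terminate(_Reason, _State)          -> ok.
code_change(_OldVsn, State, _Extra) -> {ok, State}.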

But what stands above this? Who supervises the supervisor?

Generally speaking, a component like mob monsters would be a part of a larger concept of world objects, so whatever the world object “service” concept is would sit above mobs, and mobs would be one component of world entities in general.

To sum up, here is a craptastic diagram:

Yes, my games involve wildlife and blonde nurses.

The diagram above shows solid lines for spawn_link, and dashed lines to indicate the direction of requests for things like spawn_link. The diagram does not show anything else. So monitors, messages, etc. are all just not there. Imagine them. Or don’t. That’s not the point of this post.

“But wait, I see what you did there… you made a bigger diagram and cut a bunch of stuff out!”

Yep. I did that. I made an even huger, much crappier, more inaccurate diagram because I wasn’t sure at first where I wanted to fit this into my imaginary game system.

And then I got carried away and diagrammed a lot more of the supervision tree.

And then I thought “Meh, screw it, I’ll just push this up to a rough imagining of what it might look like pushed all the way back to the SuperSup”.

Here is the result of that digression:

It wouldn’t look exactly like this, so use your imagination.

ALL. THAT. SUPERVISION.

Yep. All that. Right there. That’s why it’s called a “supervision tree” instead of a “supervision list”. Any place in there where you don’t have a dependency between parts, a thing can crash all by itself and not bring down the system. Consider this: the entire game can fail and chat will still work, users will still be logged in, etc. Not nearly as big a deal to restart just that one part. But what about ItemReg? Well, if that fails, we should probably squash the entire item system (I’ve got guns, but no bullets! or whatever) because game items are critical data. Are they really critical data? No. But they become critical because gamers are much more willing to accept a server interruption than they are to accept losing items and having bad item data stored.

And with that, I’m out! Hopefully I was able to express a tiny little bit about one way supervision can be coupled with workers in the context of an ongoing, configured service that lives within a larger Erlang system and requires on-the-fly spawning of supervised workers.

(Before any of you smarties who have been around a while point out how I glossed over a few things, or how spawning a million items as processes might not be the best idea… I know. That’s not the point of this post, and the “right approach” is entirely context dependent anyway. But constructive criticism is, as always, most welcome.)

Hazards of the Windows yen-mark backslash

The fact that Windows fonts still default to displaying backslashes as yen-marks has been a perennial annoyance for me. A conversation about it today provided a wonderful illustration of just how irritating this can be.

Here is what I saw: [screenshot]

Here is what someone else saw: [screenshot]

This is just a humorous example of technology gone stupid, but it can be a very real disaster in source code (escapes are suddenly uncertain) and quite a few pieces of small business software (and even modern websites right now) have ridiculous output problems where a price is listed as “\50,000” — which isn’t such a big deal until you have a grid with invisible borders and see something that suddenly looks like a pricing equation instead of a statement of price: “500,000pc \ 50,000” Oops!

KILL THIS WITH FIRE WHEREVER YOU FIND IT.

zUUID: An Example Erlang/OTP Project

I was talking with a friend of mine yesterday about how UUID v2 seems to have evaporated. We looked into things further and found it’s not actually included in RFC 4122! One thing led to another and I wound up writing an example project that is yet another UUID generator/utility in Erlang — but this time it actually has duplicate v1 and v2 detection/correction and implements something as close as I can find to what is defined as UUID version 2 values.

As there are already plenty of UUID projects around I focused on making this one as readable as I possibly could — including exported documentation, in-source documentation, obvious variable names, full typespecs, my silly little “pure” notation, blatantly obvious bitstring syntax, and the obligatory github presence.

Hopefully some folks newish to Erlang will come along and explain to me what confuses them about that code, the process of writing it, the documentation conventions, etc. so that I can become a better literate programmer. Of course, since the last thing the world needs is another UUID implementation I suppose I would have had better luck with something at least peripherally related to the web. (>.<)