Author Archives: zxq9

Erlang: Naive Matrix Multiplication

Someone asked what was surely a homework question today on StackOverflow about matrix multiplication in Erlang. I set out to answer him in as simple a way as possible, but wound up writing a naive matrix generation and multiplication module.

The code to the module might be of interest to new Erlangers, as it both adheres to the style of zuuid and includes many examples of using a combination of list operations and explicit recursion to cut clutter and make the meaning of otherwise complex operations clear.

Here is the code:

%%% @doc
%%% A naive matrix generation, rotation and multiplication module.
%%% It doesn't concern itself with much checking, so input dimensions must be known
%%% prior to calling any of these functions lest you receive some weird results back,
%%% as most of these functions do not crash on inputs that go against the rules of
%%% matrix multiplication.
%%%
%%% All functions crash on obviously bad values.
%%% @end 

-module(naive_matrix).
-export([random/2, random/3, rotate/1, multiply/2]).

-type matrix() :: [[number()]].


-spec random(Size, MaxValue) -> Matrix
    when Size     :: pos_integer(),
         MaxValue :: pos_integer(),
         Matrix   :: matrix().
%% @doc
%% Generate a square matrix of dimensions {Size, Size} populated with random
%% integer values in the range 1..MaxValue, inclusive.

random(Size, MaxValue) when Size > 0, MaxValue > 0 ->
    random(Size, Size, MaxValue).


-spec random(X, Y, MaxValue) -> Matrix
    when X        :: pos_integer(),
         Y        :: pos_integer(),
         MaxValue :: pos_integer(),
         Matrix   :: matrix().
%% @doc
%% Generate a matrix of dimensions {X, Y} populated with random integer values
%% in the range 1..MaxValue, inclusive.

random(X, Y, MaxValue) when X > 0, Y > 0, MaxValue > 0 ->
    Columns = lists:duplicate(X, []),
    Populate = fun(Col) -> row(Y, MaxValue, Col) end,
    lists:map(Populate, Columns).


-spec row(Size, MaxValue, Acc) -> NewAcc
    when Size     :: non_neg_integer(),
         MaxValue :: pos_integer(),
         Acc      :: [pos_integer()],
         NewAcc   :: [pos_integer()].
%% @private
%% Generate a single row of random integers.

row(0, _, Acc) ->
    Acc;
row(Size, MaxValue, Acc) ->
    row(Size - 1, MaxValue, [rand:uniform(MaxValue) | Acc]).


-spec rotate(matrix()) -> matrix().
%% @doc
%% Takes a matrix of {X, Y} size and rotates it left, returning a matrix of {Y, X} size.

rotate(Matrix) ->
    rotate(Matrix, [], [], []).


-spec rotate(Matrix, Rem, Current, Acc) -> Rotated
    when Matrix  :: matrix(),
         Rem     :: [[number()]],
         Current :: [number()],
         Acc     :: matrix(),
         Rotated :: matrix().
%% @private
%% Iterates doubly over a matrix, packing the diminished remainder into Rem and
%% packing the current row into Current. This is naive, in that it assumes an
%% even matrix of dimensions {X, Y}, and will return one of dimensions {Y, X}
%% based on the length of the first row, regardless of whether the input was
%% actually even.

rotate([[] | _], [], [], Acc) ->
    Acc;
rotate([], Rem, Current, Acc) ->
    NewRem = lists:reverse(Rem),
    NewCurrent = lists:reverse(Current),
    rotate(NewRem, [], [], [NewCurrent | Acc]);
rotate([[V | Vs] | Rows], Rem, Current, Acc) ->
    rotate(Rows, [Vs | Rem], [V | Current], Acc).


-spec multiply(ValueA, ValueB) -> Product
    when ValueA  :: number() | matrix(),
         ValueB  :: number() | matrix(),
         Product :: number() | matrix().
%% @doc
%% Accept any legal combination of scalar and matrix values to be multiplied.
%% The correct operation will be chosen based on input values.

multiply(A, B) when is_number(A), is_number(B) ->
    A * B;
multiply(A, B) when is_number(A), is_list(B) ->
    multiply_scalar(A, B);
multiply(A, B) when is_list(A), is_number(B) ->
    multiply_scalar(B, A);
multiply(A, B) when is_list(A), is_list(B) ->
    multiply_matrix(A, B).


-spec multiply_scalar(A, B) -> Product
    when A       :: number(),
         B       :: matrix(),
         Product :: matrix().
%% @private
%% Simple scalar multiplication of a matrix.

multiply_scalar(A, B) ->
    multiply_scalar(A, B, []).


-spec multiply_scalar(A, B, Acc) -> Product
    when A       :: number(),
         B       :: matrix(),
         Acc     :: matrix(),
         Product :: matrix().
%% @private
%% Scalar multiplication is implemented here as an explicit recursion over
%% a list of lists, each element of which is subjected to a map operation.

multiply_scalar(A, [B | Bs], Acc) ->
    Row = lists:map(fun(N) -> A * N end, B),
    multiply_scalar(A, Bs, [Row | Acc]);
multiply_scalar(_, [], Acc) ->
    lists:reverse(Acc).


-spec multiply_matrix(A, B) -> Product
    when A       :: matrix(),
         B       :: matrix(),
         Product :: matrix().
%% @private
%% Multiply two matrices together according to the matrix multiplication rules.
%% This function does not check that the inputs are actually proper (regular)
%% matrices, but does check that the input row/column lengths are compatible.

multiply_matrix(A = [R | _], B) when length(R) == length(B) ->
    multiply_matrix(A, rotate(B), []).


-spec multiply_matrix(A, B, Acc) -> Product
    when A       :: matrix(),
         B       :: matrix(),
         Acc     :: matrix(),
         Product :: matrix().
%% @private
%% Iterate a row multiplication operation of each row of A over matrix B until
%% A is exhausted.

multiply_matrix([A | As], B, Acc) ->
    Prod = multiply_row(A, B, []),
    multiply_matrix(As, B, [Prod | Acc]);
multiply_matrix([], _, Acc) ->
    lists:reverse(Acc).


-spec multiply_row(Row, B, Acc) -> Product
    when Row     :: [number()],
         B       :: matrix(),
         Acc     :: [number()],
         Product :: [number()].
%% @private
%% Multiply the input Row across each row of the already-rotated matrix B,
%% returning the list of resulting sums.

multiply_row(Row, [B | Bs], Acc) ->
    ZipProd = lists:zipwith(fun(X, Y) -> X * Y end, Row, B),
    Sum = lists:sum(ZipProd),
    multiply_row(Row, Bs, [Sum | Acc]);
multiply_row(_, [], Acc) ->
    Acc.
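
For anyone who wants to poke at it, here is the sort of shell session I used to sanity check the module (assuming the code above is saved as naive_matrix.erl; the random/2 values will of course differ every run):

```erlang
1> c(naive_matrix).
{ok,naive_matrix}
2> naive_matrix:multiply([[1,2],[3,4]], [[5,6],[7,8]]).
[[19,22],[43,50]]
3> naive_matrix:rotate([[1,2],[3,4]]).
[[2,4],[1,3]]
4> naive_matrix:multiply(3, [[1,2],[3,4]]).
[[3,6],[9,12]]
5> naive_matrix:random(2, 9).
[[7,2],[5,9]]  %% random; your values will vary
```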

Hopefully reading that on a blog won’t drive anyone too nuts. I’ll probably include an expanded version of that (or something related) in a convenience library eventually. Unless I forget. Meh.

Web Designers: Stop making SPAs for inherently web 1.0 style sites

It is 2017. What’s with the drive to make everything an SPA whether it needs to be or not? This is getting a little ridiculous. I’m going to ramble on below a bit because I’ve got a hankering to do so — pay this no mind.

All around the web I see sites that are best represented as a collection of inter-linked documents, and all around the web I see many of those being changed into single-page applications (SPAs). Even more stupid is when the SPA in question was built by some naive dope who included a little bit of almost every JS framework in existence — including a random selection from the thousands of obsolete and dead ones.

What is the goal? What’s the deal? Do web authors today not know how the web was actually intended to work originally? That document publication is actually its reason for existence in the first place and that “web applications” are a new thing that is a backhack to an incomplete standard that only sorta-kinda-works?

Granted, the reason it only sorta-kinda-works is due mostly to the problems inherent in the fact that only a single language is allowed in scripts… which is ridiculous. Was nobody paying attention to the Guile2 approach all those years? The only lesson learned from the Java applet and Flash experience seems to have been that “it sucks to force users to install runtimes as plugins”. Ugh.

Anyway, back to web applications…

I get it. For the moment we don’t have a solid distinction between “a document browser” and “an application browser” so we are stuck with this insufficient worst-of-both-worlds nether region of “applications that masquerade as documents”. And that drives anyone nuts who has given this much thought.

Not that a lot of people have considered the difference deeply. I imagine that is probably because very few new coders today have ever written more than a line or two of code intended to run natively on a user’s local system. Nearly everyone has written thousands of lines of code intended to run natively on server-side systems, but even that is getting wonky because many youngsters today don’t know how to deploy without using Docker yet lack the faintest inkling as to what problems Docker actually is intended to solve and wind up bypassing better solutions when they exist.

Tools shine when they are used in a focused way, performing the job for which they were intended. The web is the same way. Yes, it is a big jumble of crap. So let’s just leave that there. Networks are a big jumble of crap, too, and so are human societies — so we’ve adopted dirty ways of dealing with the dirt. The jumbly pile of shit that is the web is one of our ways of dealing with that. Everything times out. Everything is sent in text. Protocols are bloated and redundant. There isn’t even a proper definition of what “valid” HTML and XML and JSON and whatever else is in most cases. It’s all racing toward a singularity where everything is uniformly stupid. But… whatever, it sort of kind of still works — and humans just barely work themselves, so that’s par for the course.

The original web was designed to function as an insecure document publication system where documents could be interlinked. We realized that we could include more interesting stuff by expanding the definition of “document” to include more than just text, and quite recently with HTML5 the way in which documents can be written is only a few orders of magnitude behind, say, LaTeX, in its ability to arrange things on the screen (that feature lag is not entirely the fault of the HTML5 authors).

This gives a lot of freedom to website authors — perhaps too much.

If a website is a set of news articles or academic papers (or even tweets) then you really don’t need a SPA; you need a more traditional sort of “web site”. It can be dressed up all pretty with shiny things sprinkled around, of course, but we don’t want a SPA that mysteriously changes state in ways that prevent users from bookmarking things or easily sending one another links to specific resources (something Twitter got right despite some initial confusion over how to frame their content), etc.

If a website is actually just a delivery front end for a graphical RPG, well, obviously the game part of the site is probably best designed as a SPA, but the rest of the site — the forums, armory, character pages, bestiary, fan wiki, manual, guild rankings, lore pages, etc. — are absolutely best presented outside of that SPA as an actual website.

See the difference?

The game example is actually quite useful to contemplate for a variety of reasons. I’ll probably come back and cut this post down to just that part. Either that or eventually come back and rewrite the first bits to more accurately convey the humor with which I, as a graybeard resident in cyberspace for about 30 years now, view the state of the web today.

Whatever you do, dear reader, have fun coding, and remember: Don’t outsmart yourself!

Las Vegas shooting prediction: Most casualties were not due to gunshot wounds

Looking over the data for large stampedes and crowd crush events at concerts and sporting events, and comparing this to what I know personally from a career spent mostly handling various weapons in a tactical environment, I expect that we will discover fairly soon that the vast majority of casualties during the Las Vegas shooting — both injuries and fatalities — were actually due to stampede, and not anything to do with gunshot wounds at all.

Of course, in the confusion this issue has become politicized to an absolutely ridiculous degree by various anti-gun factions, and much of the US and European media is loath to report anything other than anti-gun statistics for the moment, so we are seeing language tailored to evoke images of hundreds of people with actual gunshot wounds and zero people with stampede injuries.

For example: “Shooter in Las Vegas [blah blah blah] over 500 wounded.” This makes the reader or listener immediately envision 500 people actually wounded, as in due to violent trauma — and deliberate violent trauma at that. Which in this case would be exclusively due to gunshot wounds. But we have never seen a breakdown of causes of bodily harm by type, and this data will take a while to assemble.

By the time we do see these stats most people will not really be interested because immigration in Europe or stubborn people in Madrid/Barcelona or NFL SJW activity or whatever else will steal the spotlight and public attention before then. In other words, people will be distracted with another issue-of-the-day by then and forget that the new factoids they see relate to a previous event they felt very strongly about at the time it occurred.

Watch for this one.

Asian Governments Making Social Moves Together

I expect Asian governments to manifest a low-key but characteristically firm and absolute (and often official) position against Islam. Actually, I don’t expect it, I’m watching it happen and just now recognizing a fairly uniform trend. Something is going on in Asia with regard to this, and I don’t know quite what it is, but there is no doubt that doors are closing all across Asia for Muslims in general.

I think the timing is not a coincidence — the nature of Islamic threats is changing, becoming more diffuse, and taking on a different character just as a new generation of indoctrination is beginning across the West and Asia.

  • Myanmar has found something much more compelling than mere domestic political expediency to engage in its current operations (ISIS returners, such as are turning up in Malaysia, Indonesia and the Philippines, are one possibility).
  • China has begun confiscating the Koran and categorized it as a book containing extremist political sentiment.
  • Thailand is readying a firm move against the southern Muslim rebels — and at the same time ISIS returners are very effectively influencing the young generation throughout the old Pattani region.
  • Saudis and other donors are standing up madrasas throughout Malaysia and Indonesia, and the Malaysian government is unable to stop the trend, while higher-ups in Putrajaya remain strangely blind to the problem even as they complain about it.
  • The Philippines is obviously on a “you’re with us or against us” path politically and socially. And a certain portion of the younger Muslim generation today is much more willing to take that as a challenge instead of an offer to pledge fealty (or at least negotiate terms).
  • Japanese are, at least anecdotally, becoming increasingly uneasy with the idea of accepting any Muslims, even as guest workers. The striking thing there is that ten years ago (well after 9/11) the topic of religion would never have been mentioned when discussing this issue socially, but now it is brought up. This change over the last year or two coincides with the first mosque in Kyoto trying to promote itself via online ads and Japanese demonstrating an instant and strong aversion to the very concept of proselytization. They are now in “wait and see” mode socially — to watch and see how things turn out in Europe.
  • South Koreans seem to be on the same page as the Japanese — the attitude toward Islam having soured considerably over the last five years or so. Once again, this is anecdotal, but the subject has come up more than once, and many South Koreans keep up with news of attacks in France, Sweden and the UK.
  • Indonesia is seeing the rise of extra-judicial Islamic enforcement gangs.
  • Malaysia is seeing a similar rise in extra-judicial Islamic enforcement gangs, but the effect is somewhat muted by considerable repression by the special police and more active engagement with the group leaders.
  • Returners, returners, returners. ISIS veterans are flooding into various parts of Asia, fresh off a tour in Syria, North Africa, Iraq or Afghanistan with ISIS and keeping in touch with one another. Of course, nobody feels comfortable with that. Unlike in Europe, though, well-known jihadis are not left to their own devices and most go missing somewhere in transit — but it is clear and evident that many are still returning and building new lines of communication and influence locally.

Any one of these issues, from official government actions to simple social reactions, would be grounds for certain groups to rally large responses — Islamic groups as well as Western-based political groups with strong anti-Asian nationalist agendas (something I’ve always found very odd). But the only thing making the news is Myanmar right now, and that’s a pretty hopeless fight to try to pick in terms of political pressure. Myanmar is about as pliable as North Korea as long as China is on their side, and China is indeed on their side with regard to this detail.

I do not see a future where Asian governments will feel compelled to do anything other than increase their resistance to an increased domestic Muslim presence. I fully expect that religious questions will be incorporated on visa applications to places like China eventually (not that repression of religion is anything new there).

I have no idea how any of this is going to turn out, but I find this trend notable and the timing troubling. I don’t know exactly what is triggering this much activity just now (why not a decade ago?), but something is clearly going on. It could be the outcome of some government assessments, or simply a change in the domestic social outlook, or both — but something is going on with this. And, of course, it is impossible to say “they are wrong”. It is just what they are doing and I’m just pointing it out.

Erlang: Silly way to see if your shell supports VT100 commands

There are a few cases where it can be useful to use VT100 terminal commands in shell interaction scripts to draw frames, progressbars, menu lines, position the cursor, clear the screen, colorize text, etc.
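
For example, colorizing and clearing come down to just printing the right escape bytes. This quick sketch uses only standard VT100/ECMA-48 sequences (nothing specific to any library) and can be pasted straight into the shell:

```erlang
%% \e is the escape character (27). "\e[2J" clears the screen and "\e[H"
%% homes the cursor; "\e[32m" sets a green foreground, "\e[1m" sets bold,
%% and "\e[0m" resets all attributes.
ok = io:format("\e[2J\e[H"),
ok = io:format("\e[32mgreen text\e[0m~n"),
ok = io:format("\e[1mbold text\e[0m~n").
```

If the terminal understands VT100 you see a cleared screen and styled text; if not, you see the raw bracketed garbage.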

I actually have a small library of utilities like this I might eventually release, but it’s a pretty niche need.

Anyway, within that niche need, here is a really silly way to see if the terminal you run your shell in supports VT100 commands. (If you’re on Linux using pretty much any prepackaged terminal then your terminal supports VT100 commands, but that is not always so true on Windows, depending on how you are accessing your shell.) Paste the following into your shell:

Z =
  fun() ->
    {ok, S} = gen_tcp:connect("towel.blinkenlights.nl", 23, []),
    ok = gen_tcp:send(S, "\r\n"),
    Q =
      fun R() ->
        receive
          {tcp, S, B} ->
            ok = io:format("~ts", [B]),
            R();
          {tcp_closed, S} ->
            done
        after 60000 ->
            ok = gen_tcp:close(S),
            timeout
        end
      end,
    Q()
  end.

And then do Z().

(I remember seeing this first years ago and had forgotten it was even a thing! Sysop excuses is still live at port 666 as well, btw…)

Erlang: Converting text strings to Erlang terms

We all love file:consult/1 and are familiar now with its inverse function. And of course everyone knows how comfortable it is to use the BIFs term_to_binary/1 and binary_to_term/1,2 to communicate over the network between nodes and even among other networked thingies written in other programming languages using BERT-RPC.
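
As a quick refresher, the BIF round trip looks like this in the shell (the exact bytes in the binary depend on which external term format encodings your OTP release emits, so I won’t pretend to show them all):

```erlang
1> B = term_to_binary({hello, [1,2,3]}).
<<131,...>>  %% external term format; the leading 131 is the version tag
2> binary_to_term(B).
{hello,[1,2,3]}
```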

But we still have a gap.

There is not a very well known way to convert a text string that represents Erlang terms directly into a list of actual Erlang terms without writing to a file first and then calling file:consult/1. Most of the time you will never have this problem. But when you do encounter this problem it can be mighty annoying to figure out the steps to convert the string or binary to internal Erlang terms (to the point that I sometimes see people actually write to a temporary file just so they can then call file:consult/1 and then delete the file).

So, let’s take a look:

scan_binary(Bin) ->
    TermString = binary_to_list(Bin),
    scan_string(TermString).

scan_string(TermString) ->
    {_, Lines} = lists:foldl(fun break_terms/2, {"", []}, TermString),
    Strings = lists:reverse(Lines),  % foldl accumulates the term strings in reverse
    Tokens = [T || {ok, T, _} <- lists:map(fun erl_scan:string/1, Strings)],
    AbsForms = [A || {ok, A} <- lists:map(fun erl_parse:parse_exprs/1, Tokens)],
    [V || {value, V, _} <- lists:map(fun eval_terms/1, AbsForms)].

break_terms($., {String, Lines}) ->
    Line = lists:reverse([$. | String]),
    {"", [Line | Lines]};
break_terms(Char, {String, Lines}) ->
    {[Char | String], Lines}.

eval_terms(Abstract) ->
    erl_eval:exprs(Abstract, erl_eval:new_bindings()).

You’ll notice that I did not simply use string:lexemes(TermString, [$.]) (the successor to the now obsolete string:tokens/2) to break the original into discrete strings. That is because each string requires a period at the end or else erl_scan:string/1 will reject it. It is dramatically more efficient to run through the string a single time breaking at the periods and adding them back than traversing it once to break it into segments, then traversing every resulting string again just to add a period at the end (which also means an extra traversal of the list of that list to make the adjustments!).

Everything that happens in scan_string/1 can, of course, crash if there is anything wrong in the input. If used as-is it should probably be run inside of a try..catch clause (and you should almost never, ever be using try..catch in Erlang to begin with, but this is one of the very few cases where it is probably a good idea). That could be accomplished by wrapping it in a non-insane function such as:

-spec maybe_scan(String) -> Outcome
    when String  :: string(),
         Outcome :: {ok, [term()]}
                  | {error, Reason :: term()}.

maybe_scan(String) ->
    try
        Terms = scan_string(String),
        {ok, Terms}
    catch
        error:Reason -> {error, Reason}
    end.
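
Used on a single-term string it looks like this (note that a string that fails to scan or parse is silently filtered out by the comprehensions in scan_string/1, so mostly it is evaluation errors that actually reach the catch):

```erlang
1> maybe_scan("{port, 5432}.").
{ok,[{port,5432}]}
2> maybe_scan("{oops").  %% no terminating period, so no complete term
{ok,[]}
```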

You’ll notice that I have a specific scan_binary/1 and a scan_string/1 also. I haven’t played around with this enough yet to feel comfortable throwing a full-blown iolist() at this, so my assumption is that you’re either reading data in from a file and will have a binary to start with, or would have a string that arrives or is constructed somewhere internally and know that you should flatten it yourself before calling scan_string/1 or maybe_scan/1.

How did I arrive at this?

The larger problem I have had to solve just now is unpacking and reading in configuration data from a large number of tar archives that I receive over the wire. While I could unpack them to disk, then read the file I want with file:consult/1, it is dramatically faster to unpack only the file I wanted from the archive in memory (as the archive itself has never been written to disk anyway), and that leaves me with a binary string of the file contents, but nothing on which I can call file:consult/1. Dhoh!

My solution to that problem was the above. This function has done its work now and I don’t need it anymore, but it strikes me as not such a crazy situation for other programmers to run into at some point so I’m leaving this here for my future self. I’ll probably include this function in a future version of a convenience library, and at that point I will either refactor it to break down all the possible error returns in a proper way (crash reports from within list operations inside list comprehensions can be mysterious), or decide that the details of an error from, say, erl_scan would be more confusing than it’s worth and instead provide a more generic return from some interface like maybe_scan/1.

Trump on the DPRK: Exerting Maximal Regime Change Influence

Sitting within the target zone for a North Korean retaliation causes one to contemplate a bit on the state of things. Trump has doubled down on his bellicose rhetoric of “fire and fury” over the course of the last day, and quite a few people are flipping out, as anyone could have predicted. I have received several emails and calls from friends wishing me well if things go south, expressing hopes that various cabinet personalities can reel Trump in and so on.

All of this assumes Trump is nuts. That is far from an accurate portrayal of the situation.

Washington is faced with a very tough choice right now, but one that has only one real option available: Does Washington wait until American cities sit under nuclear threat from a country with a decision making apparatus that is only a single person deep (meaning, ultimately, the strike decision is left up to personal whim and intent), or does it sacrifice non-Americans to protect Americans?

Obviously, the choice is clear: risk non-Americans instead of risking Americans. To think that any other nation would do any differently is to believe we exist in a parallel universe where altruism reigns, feelings are reasonable goals of achievement and love conquers all. We do not live in that universe.

Let’s be clear: the US will not allow tens of millions of Americans to sit at risk of a North Korean leader who wishes to advance an extortion game against Washington. It will avert that by risking tens of thousands of foreign lives (mostly South Koreans, but also some Japanese and possibly Chinese as well). Even though I live within the zone that might get splatted, I really can’t see any other way for things to be — and let’s remember: this is tens of millions of American lives VS a few tens of thousands of foreigners from Washington’s perspective. Not much of a choice there, even if one is a hardcore humanitarian.

So now that we have established the American calculus, and we’re not deluding ourselves into thinking that management of a nuclear-armed, globally-strike capable North Korea is part of our menu of options, what is Trump going to do about this? How about get the Chinese or Russians to do something instead? Well, that route has already been explored and exhausted. The Chinese enjoy North Korea being a useful problem regionally, so do the South Koreans to some degree, the Russians love having the DPRK act as a consistent policy spoiler for everyone involved, and even the Japanese have leveraged the existence of North Korea from time to time. It was a useful problem for pretty much everyone for quite a long time, and that’s why it has been allowed to fester for so long.

But now things have gotten serious.

The US cannot wait longer than next spring to strike. The decision on exactly when to strike is dependent on weather, mostly. If the Americans believe that the advantage leans to their side in cold weather then we will see a strike sometime between late November and early March. If the advantage would go to the Americans in warmer months then we will see a strike sometime between now and December. Expect the US to ramp up a strike capability from now until whenever and just sit on it to mask the moment of their intent. Sure, nonessential personnel being relocated from the American garrisons in South Korea would be a telltale sign, but I don’t know if Washington would even telegraph its intent that way rather than letting the chips fall where they may. This is serious business, after all. On the other hand, Washington may evacuate nonessential personnel right away and just remove that as an indicator altogether very soon. Who knows.

Back to the rhetorical bit Trump threw out the other day and then doubled down on today…

Trump is doing everything but being explicit about his threat to either glass North Korea entirely or commit to a massive conventional strike that comes very close to that. Looking at Trump’s negotiating style since the 1980s it is very likely that he intends to do exactly that if the situation does not improve — he is not known for bluffing. He also would not have made this decision alone. China has already stated that they would defend North Korea in the event of an American strike, so by elevating it to the level of an absolute conflict Trump is essentially guaranteeing that there would not be any chance for any action to escalate to becoming a regional war because there would not be a North Korea left to defend.

That sounds crazy, but it is not. It ensures a limited scope to the conflict from the start, and that is wise.

From the North Korean perspective, though, it does one more thing: it places every single leader and peasant and their families under threat of annihilation if Pyongyang does not change course in some way. The Chinese have been trying to effect a regime change in Pyongyang unsuccessfully for a few years now. Beijing can’t do it, and it is very likely that nobody outside of North Korea can, short of a war. Trump’s appeal to an absolute level of violence here is an overt signal to the North Koreans that it is up to them to effect regime change or face total annihilation. There is plenty of hidden opposition to Kim Jong Un in Pyongyang — but unless they feel that Trump is more dangerous to them than their own leader they are unlikely to feel motivated to move. After all, North Korea has had spats with the West hundreds of times over the last several decades — so often that there is almost a script for this sort of thing.

Trump is going off script. He is doing so to evoke a specific survival reaction in the upper leadership in Pyongyang, specifically a reaction against Kim Jong Un. This is probably the best chance anyone has of deposing him: turning his own leadership against him. They might die if they go against Kim Jong Un. They will certainly die if they go against Trump. This is how mutinies are made from the outside. On the outside chance that it comes to an American strike Trump has already guaranteed that a Chinese retaliation would be pointless. A massive strike (nuclear or conventional) would be a huge shock to the world, but the populations of the world are already experiencing hyperbolic rhetorical shock — when the volume has been turned up to 11 for so long there isn’t really anywhere left to go.

Trump is not crazy and his staff have certainly planned out (and are constantly revising) attack plans on North Korea designed to execute a strike devastating enough to limit the scope of any follow-on actions from anyone in the region. He has since moved on to working an influence play directly aimed at the North Korean leadership. This is how the game is played. People today are not used to being forced into situations where one bad option is balanced by an even worse one. Sometimes there is no unicorn to come save the day. The world is only going to turn more harsh in the coming decade — we probably will only remember this as a side show (if we even care to remember it at all).

Erlangers! USE LABELS! (aka “Stop Writing Punched-in-the-Face Code Blocks”)

Do you write lambdas directly inline in the argument list of various list functions or list comprehensions? Do you ever do it even though the fun itself, or the other arguments or return assignment/assertion for the call are too long and force you to scrunch that lambda’s definition up into an inline-multiline ball of wild shit? YOU DO? WTF?!?!? AHHHH!

First off, realize this makes you look like a douchebag for not being polite to other people or your future self whenever you do it. There is a big difference for the human reader between:

%%% From shitty_inline.erl

do_whatever(Keys, SomeParameter) ->
    lists:foreach(fun(K) -> case external_lookup(K) of
                  {ok, V} -> do_side_effecty_thing(V, SomeParameter);
                  {error, R} -> report_some_failure(R)
                end
          end, Keys
    ).

and

%%% From shitty_listcomp.erl

do_whatever(Keys, SomeParameter) ->
    [fun(K) -> case external_lookup(K) of
        {ok, V} -> do_side_effecty_thing(V, SomeParameter);
        {error, R} -> report_some_failure(R) end end(Key) || Key <- Keys],
    ok.

and

%%% From less_shitty_listcomp.erl

do_whatever(Keys, SomeParameter) ->
    ExecIfFound = fun(K) -> case external_lookup(K) of
            {ok, V} -> do_side_effecty_thing(V, SomeParameter);
            {error, R} -> report_some_failure(R)
        end
    end,
    [ExecIfFound(Key) || Key <- Keys],
    ok.

and

%%% From labeled_lambda.erl

do_whatever(Keys, SomeParameter) ->
    ExecIfFound =
        fun(Key) ->
            case external_lookup(Key) of
                {ok, Value}     -> do_side_effecty_thing(Value, SomeParameter);
                {error, Reason} -> report_some_failure(Reason)
            end
        end,
    lists:foreach(ExecIfFound, Keys).

and

%%% From isolated_functions.erl

-spec do_whatever(Keys, SomeParameter) -> ok
    when Keys          :: [some_kind_of_key()],
         SomeParameter :: term().

do_whatever(Keys, SomeParameter) ->
    ExecIfFound = fun(Key) -> maybe_do_stuff(Key, SomeParameter) end,
    lists:foreach(ExecIfFound, Keys).

maybe_do_stuff(Key, Param) ->
    case external_lookup(Key) of
        {ok, Value}     -> do_side_effecty_thing(Value, Param);
        {error, Reason} -> report_some_failure(Reason)
    end.

Which versions force your eyes to do less jumping around? How about which version lets you most naturally understand each component of the code independently? Which is more universal? What does code like this translate to after erlc has a go at it?

Are any of these difficult to read? No, of course not. Every version of this is pretty darn basic and common — you need a listy operation but require a closure over some in-scope state to make it work right, so you really do need a lambda instead of being able to look all suave with a fun some_function/1 type thing. So we agree: taken by itself, any version of this is easy to comprehend. But when you are reading through hundreds of these sorts of things at once to understand wtf is going on in a project, while also remembering a bunch of other shit code that is lying around and has side effects, while trying to recall some detail of a standard, while the phone is ringing… things change.

Do I really care which way you do it? In a toy case like this, no. In actual code I have to care about forever and ever — absolutely, yes I do. The fifth version is my definite preference, but the fourth will do just fine also.

(Or even the third, maybe. I tend to disagree with using a list comprehension to effect a loop over a list of values purely for its side effects, discarding the result: it is semantically ambiguous, and whenever possible I like every expression of my code to be either an assignment or an assertion (so nearly every line should have a = on it). In other words, use lists:foreach/2 in these cases, not a list comp. A listcomp is doubly wrong here because the main utility of a list comprehension is normally to achieve a closure over local state, but in this case we are just calling another closure — so semantic fail there, twice.)

But what about my lolspeed?!?

I don’t know, but let’s see. I’ve created five modules, based on the above examples:

  1. shitty_inline.erl
  2. shitty_listcomp.erl
  3. less_shitty_listcomp.erl
  4. labeled_lambda.erl
  5. isolated_functions.erl

These all call the same helpers that do basically nothing important other than having actual side effects when called (they call io:format/2). What we are interested in here is the generated assembler. What is the cost of introducing these labels that help the humans out VS leaving things all messy the way we imagine might be faster for the runtime?

It turns out that, just like with using assignments to document your code, there is zero cost to labeling your lambdas. For example, here is the assembler for shitty_inline.erl side-by-side with labeled_lambda.erl:

Oooh, look. The exact same stuff!

(This is a screenshot, a text file with the contents shown is here: label_example_comparison.txt)

See? All that annoying-to-read inline lambdaness buys you absolutely nothing. You’re not helping the compiler, you’re not helping the runtime, and you are hurting your future self and anyone you want to work with on the same code later. (Note: You can generate precompiler output with erlc -P and erlc -E, and assembler output with erlc -S. Here is the manpage. Play around with it a bit, BEAM and EVM are amazing platforms, wide open for exploration!)
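To try this yourself (assuming a module file named labeled_lambda.erl sits in the current directory), each flag writes its listing to a file next to the source:

```shell
# Each erlc flag emits a listing file alongside the source module:
erlc -P labeled_lambda.erl   # labeled_lambda.P: parsed code after preprocessing and parse transforms
erlc -E labeled_lambda.erl   # labeled_lambda.E: code after all source transformations
erlc -S labeled_lambda.erl   # labeled_lambda.S: BEAM assembler, as compared above
```

Diffing two modules’ .S files (as in the screenshot above) is then just `diff shitty_inline.S labeled_lambda.S`.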

So use labels.

As for execution speed… all of these perform basically the same, except for the last one, isolated_functions.erl. Here is the assembler for that one: isolated_functions.S. This outperforms the others, though to a relatively insignificant degree. Of course, it is only an “insignificant degree” until that part of the program is the most critical part of whatever your program does — then even a 10% difference may be a really huge win for you. In those cases it is worth refactoring to test the speed of different representations against each version of the runtime you happen to be using — and all thoughts on mere style have to take a backseat. But this is never the case for the vast majority of our code.

(I’ve read reports in the past that indicate 99% of our performance bottlenecks tend to reside in less than 1% of our code by line count — but I can’t recall the names of any just now. If you happen to find a reference, let me know so I can update this little parenthetical blurb with some hard references.)

My point here is that breaking every lambda out into a separate named function isn’t always worth it — sometimes an in-place lambda really is more idiomatic and easier to understand, simply because you can see everything right there in the same function body. What you don’t want to see is a multi-line lambda squashed into an argument list, making things hard to read while compiling to the exact same result as labeling that lambda with a meaningful variable name on its own line and referring to it where it is invoked.
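For contrast, here is the kind of inline lambda that is perfectly fine left in place — a trivial sketch (module and function names are made up for illustration): a single-expression fun whose whole body fits on the line of the call, so there is nothing for a label to clarify.

```erlang
-module(inline_ok).
-export([doubles/1]).

%% A one-line fun reads naturally inline; breaking it out into a
%% labeled lambda or a named function would add noise, not clarity.
doubles(Numbers) ->
    lists:map(fun(N) -> N * 2 end, Numbers).
```

So `inline_ok:doubles([1, 2, 3])` gives `[2, 4, 6]`. The moment that fun grows a case expression or a second line, it has earned a label.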