Horrible, Drunk Code (but it works and demonstrates a point)

Over on StackOverflow, otopolsky was asking how to make an Erlang program that could read selected lines from a huge file, offset from the bottom, without exploding in memory.

I mentioned the standard bit about front-loading and caching the work of discovering the linebreak locations, the fact that “a huge text file” nearly always means “a set of really huge log files” and that in this case tokenizing the semantics of the log file within a database is a Good Thing, etc. (my actual answer is here).

He clearly knew most of this, and was hoping that there was some shortcut already created. Well, I don’t know that there is, but it bothered me that his initial stab at following my advice about pre-indexing the linebreaks resulted in an attempt to read a 400MB text file into memory to run a global match over it, and that this just ate up all his memory and made his machine puke.

400MB? Eating ALL your memory? NO WAY. We have to do something about this!

The main problem is I’m already a bit jeezled up because my wife broke out some awamori earlier (good business day today)… so any demo code I will produce will be, ahem, a little less than massive-public-display worthy (not that the 6 or 7 guys on the planet who actually browse Erlang questions on SO would care, or don’t already know who I am). So… I’m posting here:

-module(goofy).
-export([linebreaks/1]).

linebreaks(File) ->
    {ok, FD} = file:open(File, [raw, read_ahead, binary]),
    Step = 1000,
    Loc = 1,
    Read = file:read(FD, Step),
    {ok, Result} = index(FD, Read, 1, Loc, Step, []),
    ok = file:close(FD),
    [{1, Loc} | Result].

index(FD, {ok, Data}, Count, Loc, Step, Indexes) ->
    NewLines = binary:matches(Data, <<$\n>>),
    {NewCount, Found} = indexify(NewLines, Loc, Count, []),
    Read = file:read(FD, Step),
    index(FD, Read, NewCount, Loc + Step, Step, [Found | Indexes]);
index(_, eof, _, _, _, Indexes) ->
    {ok, lists:reverse(lists:flatten(Indexes))};
index(_, {error, Reason}, _, _, _, Indexes) ->
    {error, Reason, Indexes}.

indexify([], _, Count, Indexed) ->
    {Count, Indexed};
indexify([{Pos, Len} | Rest], Offset, Count, Indexed) -> 
    NewCount = Count + 1,
    indexify(Rest, Offset, NewCount, [{NewCount, Pos + Len + Offset} | Indexed]).

As ugly as that is, it actually runs the scan in constant space. The index list produced on a 7,247,560 line, 614,754,920 byte file does take a bit of space itself (a few dozen MB for the 7,247,560 element list…), and temporarily requires a bit more during part of the return operation (a very sudden, brief spike in memory usage right at the end as it returns). But it works, and returns what we were looking for in a way that won’t kill your computer. And… it only takes 14 seconds or so on the totally crappy laptop I’m using right now (an old dual-core Celeron).

Much better than what otopolsky ran into when his computer ran for 10 minutes before it started swapping after eating all his memory!

Results:

ceverett@changa:~/foo/hugelogs$ ls -l
合計 660408
-rw-rw-r-- 1 ceverett ceverett       928  9月  3 01:31 goofy.erl
-rw-rw-r-- 1 ceverett ceverett  61475492  9月  2 23:17 huge.log
-rw-rw-r-- 1 ceverett ceverett 614754920  9月  2 23:19 huger.log
ceverett@changa:~/foo/hugelogs$ erl
Erlang/OTP 18 [erts-7.0] [source] [64-bit] [smp:2:2] [async-threads:10] [kernel-poll:false]

Running $HOME/.erlang.
Eshell V7.0  (abort with ^G)
1> c(goofy).
{ok,goofy}
2> {HugeTime, HugeIndex} = timer:tc(goofy, linebreaks, ["huge.log"]).
{1404370,
 [{1,1},
  {2,119},
  {3,220},
  {4,...},
  {...}|...]}
3> {HugerTime, HugerIndex} = timer:tc(goofy, linebreaks, ["huger.log"]).
{14245673,
 [{1,1},
  {2,119},
  {3,220},
  {4,...},
  {...}|...]}
4> HugerTime / 1000000.
14.245673
5> HugeTime / 1000000.
1.40437
6> lists:last(HugeIndex).
{724757,61475493}
7> lists:last(HugerIndex).
{7247561,614754921}

Rather untidy code up there, but I figure it is readable enough that otopolsky can get some ideas from this and move on.
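
For completeness, here is a minimal sketch (not part of the original answer, and the function name is made up) of how such an index can be used to pull one specific line back out with file:pread/3, which is the part otopolsky actually cares about:

read_line(File, Index, LineNo) ->
    %% Index is the [{LineNumber, ByteOffset}, ...] list from goofy:linebreaks/1,
    %% with offsets counted from 1. A real version would keep the index in ets
    %% or a tuple for fast lookups instead of scanning a 7M element list.
    {LineNo, Start} = lists:keyfind(LineNo, 1, Index),
    {_, Next} = lists:keyfind(LineNo + 1, 1, Index),
    {ok, FD} = file:open(File, [read, raw, binary]),
    {ok, Line} = file:pread(FD, Start - 1, Next - Start),
    ok = file:close(FD),
    Line.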

The Bicycle v3.0

Every so often someone asks, “After you wrecked that last bicycle, what kind of bike did you buy?” Somehow I always forget to take a picture of the bike!

So, for future reference, here is a photo (child seat included).

[photo: bike3]

Vicarious Reminiscence

Remember being four years old and being taught by your parents how to read? It’s just a vague memory for most of us, but an ongoing process for my own kids today…

[image: AnnasNote]

Today, some Sanrio stationery; tomorrow, THE WORLD! (…or more likely some other blog in the digital wasteland, but we all have to start somewhere.)

XML: Xtensively Mucked-up Lists (or “How A Committee Screwed Up Sexps”)

Some folks are puzzled at why I avoid XML. They ask why I “try so hard to avoid XML” and do crazy things like write ASN.1 specs, use native language terms when possible (like Python config files in Python dicts, Erlang configs in Erlang terms, etc.), consider YAML/JSON a decent last resort, and regard XML as a non-option.

I maintain that XML sucks. I believe that it is, to date, the most perfectly subtle (or not-so-subtle) way to screw up sexps. Consider that: “screw up sexps”. What a thing to say! How in the world would one even propose to screw up such a simple idea? Let’s consider an example…

Can you identify the semantic difference among the following examples?
(Inspired by the sample XML in the Python xml.etree docs)

Version 1

<country name="Liechtenstein">
  <rank>1</rank>
  <year>2008</year>
  <gdppc>141100</gdppc>
  <neighbor name="Austria" direction="E"/>
  <neighbor name="Switzerland" direction="W"/>
</country>

Version 2

<country>
  <name>Liechtenstein</name>
  <rank>1</rank>
  <year>2008</year>
  <gdppc>141100</gdppc>
  <neighbor>
    <name>Austria</name>
    <direction>E</direction>
  </neighbor>
  <neighbor>
    <name>Switzerland</name>
    <direction>W</direction>
  </neighbor>
</country>

Version 3

<country name="Liechtenstein" rank="1" year="2008" gdppc="141100">
  <neighbor name="Austria" direction="E"/>
  <neighbor name="Switzerland" direction="W"/>
</country>

Version 4

And here there is a deliberate semantic difference, meant to be illustrative of a certain property of trees… which is supposedly the whole point.

<entries>
  <country rank="1" year="2008" gdppc="141100">
    <name>Liechtenstein</name>
    <neighbors>
      <name direction="E">Austria</name>
      <name direction="W">Switzerland</name>
    </neighbors>
  </country>
</entries>

Which one should you choose for your application? Which one is obvious to a parser? For which of them could you most likely write a general parsing routine that could pull out data that meant something? Which one could you turn into a program by defining the identifier tags as functions somewhere?
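
For comparison, here is one way the same record might look as a plain Erlang term (a sketch of mine, not anyone’s standard format). It is the kind of thing file:consult/1 reads directly, with exactly one obvious nesting and no attribute-versus-element decision to make:

{country, "Liechtenstein",
 [{rank, 1},
  {year, 2008},
  {gdppc, 141100},
  {neighbors, [{"Austria", east},
               {"Switzerland", west}]}]}.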

Consider those last two questions carefully. The so-called “big data” people are hilarious, especially when they are XML people. There is a difference between “not a large enough sample to predict anything specific” and “a statistically significant sample from which generalities can be derived”, certainly, but that has a lot more to do with representative sample data than the sheer number of petabytes you have sitting on disk somewhere. “Big Data” should really be about “Big Meaning”, but we seem to have all missed the boat on that. The reason I so hate XML is because the complexity and ambiguity introduced in an effort to make the X in XML mean something has made meanings a lot less clear to the people who have to parse (or design, document, discuss, edit, etc.) XML schemas.

Erlang: Writing Terms to a File for file:consult/1

I notice that there are a few little helper functions I seem to always wind up writing given different contexts. In Erlang one of these is an inverse function for file:consult/1, which I have to write any time I use a text file to store config data*.

Very simply:

write_terms(Filename, List) ->
    Format = fun(Term) -> io_lib:format("~tp.~n", [Term]) end,
    Text = lists:map(Format, List),
    file:write_file(Filename, Text).

[Note that this *should* return the atom 'ok' — and if you want to assert that and crash on failure, you want to do ok = write_terms(Filename, List) in your code.]

This writes each term in the list to the text file, each terminated by a period, which lets file:consult/1 return the same list back (in order — though this detail usually does not matter, because most conf files are used as proplists and are keysearched anyway).
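
A quick shell round trip shows the symmetry (assuming write_terms/2 has been exported from some hypothetical conf module, and with made-up config terms):

1> Conf = [{port, 5700}, {loglevel, info}, {nodes, [node_a, node_b]}].
[{port,5700},{loglevel,info},{nodes,[node_a,node_b]}]
2> conf:write_terms("test.conf", Conf).
ok
3> file:consult("test.conf").
{ok,[{port,5700},{loglevel,info},{nodes,[node_a,node_b]}]}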

An annoyance with most APIs is a lack of inverse functions where they could easily be written. Even if the original authors of the library don’t conceive of a use for an inverse of some particular function, whenever there is an opportunity for one, leaving it out just makes an API feel incomplete (and don’t get me started on “web APIs”… ugh). This is just one case of that. Why does Erlang have a file:consult/1 but not a file:write_terms/2 (or “file:deconsult/2” or whatever)? I don’t know. But this bugs me in most libs in most languages — this is the way I usually deal with this particular situation in Erlang.

[* term_to_binary/1 ←→ binary_to_term/1 is not an acceptable solution for config data!]

Erlang: Maps, Comprehensions and Side-effecty Iteration

In Erlang it is fairly common to want to perform a side-effecty operation over a list of values, not because you want to collect an aggregate (fold), actually map the input list to the output list (map), or build a new list in some way (list comprehension) but because you just want the side-effects of the operation.

The typical idiom (especially for broadcasting messages to a list of processes) is to use a list comprehension for this, or sometimes lists:map/2:

%% Some procedural side-effect, like spawning list of processes or grabbing
%% a list of resources:
[side_effecty_procedure(X) || X <- ListOfThings],

%% Or, for broadcasting:
[Pid ! SomeMessage || Pid <- ListOfPids],

%% And some old farts still do this:
lists:map(fun side_effecty_procedure/1, ListOfThings),

%% Or even this (gasp!) which is actually made for this sort of thing:
lists:foreach(fun side_effecty_procedure/1, ListOfThings),
%% but lacks half of the semantics I describe below, so this part
%% of the function namespace is already taken... (q.q)

That works just fine, and is so common that list comprehensions have been optimized to handle this specific situation in a way that avoids creating a return list value if it is clearly not going to be assigned to anything. I remember thinking this was sort of ugly, or at least sort of hackish before I got accustomed to the idiom, though. “Hackish” in the sense that this is actually a syntax intended for the construction of lists and only incidentally a useful way to write a fast side-effect operation over a list of values, and “ugly” in the sense that it is one of the few places in Erlang you can’t force an assertion to check the outcome of a side-effecty operation.

For example, there is no equivalent to the assertive ok = some_procedure() idiom, or even the slightly more complex variation used in some other situations:

case foo(Bar) of
    ok    -> Continue();
    Other -> Other
end,

A compromise could be to write an smap/2 function defined something like this:

smap(F, List) ->
    smap(F, 1, List).

smap(_, _, []) -> ok;
smap(F, N, [H|T]) ->
    case F(H) of
        ok              -> smap(F, N + 1, T);
        {error, Reason} -> {error, {N, Reason}}
    end.

But this now has the problem of requiring that whatever is passed as F/1 have a return type of ok | {error, Reason}, which is unrealistic without forcing a lot of folks to wrap existing side-effecty functions in something that coerces the return type to match. Though that might not be bad practice ultimately, it’s still perhaps more trouble than it’s worth.
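
For instance, an existing side-effecty call could be coerced into that contract with a thin wrapper like the one below (a sketch only; notify/1 stands in for whatever the real operation is), and the wrapper is then what gets passed to smap/2:

%% notify/1 is a made-up stand-in for any existing side-effecty function.
notify_checked(Recipient) ->
    try notify(Recipient) of
        _ -> ok
    catch
        Class:Reason -> {error, {Class, Reason}}
    end.

With that, smap(fun notify_checked/1, Recipients) either returns ok or tells you which element failed and why.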

It isn’t like most Erlang code is written in a way where side-effecty iteration over a list is likely to fail, and if it actually does fail the error data from a crash will contain whatever was passed to the side-effecty function that failed. But this still just doesn’t quite sit right with me — that leaves the prospect of list iteration in the interest of achieving a series of side effects as at least a little bit risky to do in the error kernel (or “crash kernel”) of an application.

On the other hand, the specific case of broadcasting to a list of processes would certainly be nice to handle exactly the same way sending to a single process works:

%% To one process
Pid ! SomeMessage,
%% To a list of processes
Pids ! SomeMessage,

Which seems particularly obvious syntax, considering that the return of the ! form of send/2 is the message itself, meaning that the following two would be equivalent if the input to the ! form of send/2 also accepted lists:

%% The way it works now
Pid1 ! Pid2 ! SomeMessage,
%% Another way I wish it worked
[Pid1, Pid2] ! SomeMessage,
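
Until something like that exists, a trivial helper gets most of the way there as a plain function (a sketch, with an arbitrary name) while preserving the return-the-message property of !:

%% Accepts a single pid or a list of pids; always returns the message,
%% just like the ! form of send/2.
send(Pids, Message) when is_list(Pids) ->
    _ = [Pid ! Message || Pid <- Pids],
    Message;
send(Pid, Message) ->
    Pid ! Message.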

In any case, this is clearly not the sort of language wart that gets much attention, and it’s never been a handicap for me. It just seems a bit hackish and ugly to essentially overload the semantics of list comprehensions or (ab)use existing list operations like maps and folds to achieve the side effect of iterating over a list, instead of having a function like smap/2 which is explicitly designed to achieve side-effecty iteration.

OpenSSL RSA DER public key PKCS#1 OID header madness

(Wow, what an utterly unappealing post title…)

I have run into an annoying issue with the DER output of RSA public keys in OpenSSL. The basic problem is that OpenSSL adds an OID header to its ASN.1 DER output, but other tools are not expecting this (be it the iOS keystore, a Java keystore, or Erlang’s public_key module).

I first noticed this when experiencing decode failures in Erlang trying to use the keys output by an openssl command sequence like:

openssl genpkey \
    -algorithm rsa \
    -out $keyfile \
    -outform DER \
    -pkeyopt rsa_keygen_bits:8192

openssl rsa \
    -inform DER \
    -in $keyfile \
    -outform DER \
    -pubout \
    -out $pubfile

Erlang's public_key:der_decode('RSAPrivateKey', KeyBin) would give me the correct #'RSAPrivateKey' record but public_key:der_decode('RSAPublicKey', PubBin) would choke and give me an asn1 parse error (ML thread I posted about this). Of course, OpenSSL is expecting the OID header, so it works fine there, just not anywhere else.

Most folks probably don’t notice this, though, because the primary use case for most folks is either to use OpenSSL directly to generate and then use keys, use a tool that calls OpenSSL through a shell to do the same, or use a low-level tool to generate the keys and then use the same library to use the keys. Heaven forbid you try to use an OpenSSL-generated public key in DER format somewhere other than OpenSSL, though! (Another reason this sort of thing usually doesn’t cause problems is that almost nobody fools around with crypto stuff in their tools to begin with, leaving this sort of thing as an issue far outside mainstream hackerdom…)

Of course, the solution could be to chop off the header, which happens to be 24 bytes long:

1> {ok, OpenSSLBin} = file:read_file("rsa3.pub.der.openssl").
{ok,<<48,130,4,34,48,13,6,9,42,134,72,134,247,13,1,1,1,5,
      0,3,130,4,15,0,48,130,4,...>>}
2> {ok, ErlangBin} = file:read_file("rsa3.pub.der.erlang").
{ok,<<48,130,4,10,2,130,4,1,0,202,167,130,153,242,77,196,
      252,167,142,159,17,13,69,148,41,161,50,...>>}
3> <<_:24/binary, ChoppedBin/binary>> = OpenSSLBin.
<<48,130,4,34,48,13,6,9,42,134,72,134,247,13,1,1,1,5,0,3,
  130,4,15,0,48,130,4,10,2,...>>
4> ChoppedBin = ErlangBin.
<<48,130,4,10,2,130,4,1,0,202,167,130,153,242,77,196,252,
  167,142,159,17,13,69,148,41,161,50,44,138,...>>

But… that’s pretty darn arbitrary. It turns out the header is always:

<<48,130,4,34,48,13,6,9,42,134,72,134,247,13,1,1,1,5,0,3,130,4,15,0>>

I could match on that, but it still feels weird because that particular binary string just doesn’t mean anything to me, so matching on it is still the same hack. I could, of course, go around that by writing just the public key as PEM, loading it, encoding it to DER, and saving that as the DER file from within Erlang (thus creating a same-tool-to-same-tool situation). But that’s goofy too: PEM is just a base64 encoded DER binary wrapped in a text header/footer! It’s completely arbitrary that PEM should work but DER shouldn’t! Here is what that PEM round-trip workaround looks like:

-module(keygen).
-export([start/1]).

start([Prefix]) ->
    PemFile = string:concat(Prefix, ".pub.pem"),
    KeyFile = string:concat(Prefix, ".key.der"),
    PubFile = string:concat(Prefix, ".pub.der"),
    {ok, PemBin} = file:read_file(PemFile),
    [PemData] = public_key:pem_decode(PemBin),
    Pub = public_key:pem_entry_decode(PemData),
    PubDer = public_key:der_encode('RSAPublicKey', Pub),
    ok = file:write_file(PubFile, PubDer),
    io:format("Wrote private key to: ~ts.~n", [KeyFile]),
    io:format("Wrote public key to:  ~ts.~n", [PubFile]),
    case check(KeyFile, PubFile) of
        true  ->
            ok = file:delete(PemFile),
            io:format("~ts and ~ts agree~n", [KeyFile, PubFile]),
            init:stop();
        false ->
            io:format("Something has gone wrong.~n"),
            init:stop(1)
    end.

check(KeyFile, PubFile) ->
    {ok, KeyBin} = file:read_file(KeyFile),
    {ok, PubBin} = file:read_file(PubFile),
    Key = public_key:der_decode('RSAPrivateKey', KeyBin),
    Pub = public_key:der_decode('RSAPublicKey', PubBin),
    TestMessage = <<"Some test data to sign.">>,
    Signature = public_key:sign(TestMessage, sha512, Key),
    public_key:verify(TestMessage, sha512, Signature, Pub).

This is so silly it’s maddening — and apparently folks from the iOS, PHP and Java worlds have essentially built themselves hacks to handle DER keys that deal directly with this.

I finally found a way that is both semantically meaningful (well, somewhat) and is sure to either generate a proper PKCS#1 DER public RSA key or fail, telling me that it’s looking at badly-formed ASN.1 (or binary trash): the magical command “openssl asn1parse -strparse [offset]”.

A key generation script therefore looks something like this:

#! /bin/bash

prefix=${1:?"Cannot proceed without a file prefix."}

keyfile="$prefix"".key.der"
pubfile="$prefix"".pub.der"

# Generate an 8192-bit RSA key
openssl genpkey \
    -algorithm rsa \
    -out $keyfile \
    -outform DER \
    -pkeyopt rsa_keygen_bits:8192

# OpenSSL's PKCS#1 ASN.1 output adds a 24-byte header that
# other tools (Erlang, iOS, Java, etc.) choke on, so we clip
# the OID header off with asn1parse.
openssl rsa \
    -inform DER \
    -in $keyfile \
    -outform DER \
    -pubout \
| openssl asn1parse \
    -strparse 24 \
    -inform DER \
    -out $pubfile

The output on stdout will look something like:

$ openssl asn1parse -strparse 24 -inform DER -in rsa3.pub.der.openssl 
    0:d=0  hl=4 l=1034 cons: SEQUENCE          
    4:d=1  hl=4 l=1025 prim: INTEGER           :CAA78299F24DC4FCA78E9F110D459429A1322C8AF0BF558B7E49B21A30D5EFDF0FFF512C15E5292CD85209EBB21861235250ABA3FBD48F0830CC3F355DDCE5BA467EA03F58F91E12985E54336309D8F954525802949FA68A5A742B4933A5F26F148B912CF60D5A140C52C2E72D1431FA2D40A3E3BAFA8C166AF13EBAD774E7FEB17CAE94EB12BE97F3CABD668833691D2A2FB3001BF333725E65081F091E46597028C3D50ABAFAF96FA2DF16E6AEE2AE4B8EBF4360FBF84C1312001A10056AF135504E02247BD16C5A03C5258139A76D3186753A98B7F8E7022E76F830863536FA58F08219C798229A23D0DDB1A0B57F1A4CCA40FE2E5EF67731CA094B03CA1ADF3115D14D155859E4D8CD44F043A2481168AA7E95CCC599A739FF0BB037969C584555BB42021BDB65C0B7EF4C6640CCE89A860D93FD843FE77974607F194206462E15B141A54781A5867557D8A611D648F546B72501E5EAC13A4E01E8715DFE0B8211656CFA67F956BC93EA7FB70D4C4BCFEC764128EAE8314F15F2473FCF4C6EEA29299D0B13C2CCC11A73BF58209A056CB44985C81504013529C75B6563CED3CC27988C70733080B01F78726A3ABBCD8765538D515D2282EF79F4818A807A9173CC751E46BDB930E87F8DA64C6ABDD8127900D94A20EE87F435716F4871463854AD0B7443243BDA0682F999D066EA213952F296089DA4577B7B313D536185E7C37E68F87D43A53B37F1E0FB7969760F31C291FDB99F63F6010E966EC418A10EFE7D4EBD4B494693965412F0E78150150B61A4FF84AB26355FC607697B86BFB014E6450793D6BF5EA0565C63161D670CD64E99FFD9AE9CCF90C6F77949A137E562E08787D870604825EF955B3A26F078CBA5E889F9AFEEF48FE7F36A51D537E9ADF0746EA81438A91DF088C6C048622AC372AEEBD11A2427DFCBC2FE163FD62BE3DCBDC4961ECD685CBAD55898B22E07A8C7E3FB4AEA8AC212F9CA6D6FE522EC788FEDD08E403BD70D1ABEDE03B760057920B27B70CBDCB3C1977A7B21B5CD5AF3B7D57A0B2003864C4A56686592CBFC1BBAF746E35937A3587664A965B806C06C94EC78119A8F3BB18BCFCEB1E5193C0EFD025A86AD6C2AE81551475A14B81D5A153E993612E1D7AED44AB38E9A4915D4A9B3A3B4B2DF2C05C394FC65B46E3983EA0F951B04B7558099DB04E73F550D08A453020EFD7E1A4F2383C90E8F92A2A44CC03A5E0B4F024656741464D86E89EA66526A11193435EB51AED58438AA73A4A74ABF3CDE1DE645D55A09E33F3408AD340890A72D2C6E4958669C54B87B7C146611BFD6594519CDC9D9D76DF4D5F9D391F4A8D851FF4DAE5207585C0A544A817801868AA5106869B995C66FB45D3C27EE27C078084F58476A7286E9EE5C1EF76A6CF17F8CDD829EFBB0C5CED4AD3D1C2F101AB90DC68CEA93F7B891B217
 1033:d=1  hl=2 l=   3 prim: INTEGER           :010001

And if we screw it up and don’t actually align with the ASN.1 header properly things explode:

$ openssl asn1parse -strparse 32 -inform DER -in rsa3.pub.der.openssl 
Error parsing structure
140488074528416:error:0D07207B:asn1 encoding routines:ASN1_get_object:header too long:asn1_lib.c:150:
140488074528416:error:0D068066:asn1 encoding routines:ASN1_CHECK_TLEN:bad object header:tasn_dec.c:1306:
140488074528416:error:0D06C03A:asn1 encoding routines:ASN1_D2I_EX_PRIMITIVE:nested asn1 error:tasn_dec.c:814:

Now my real question is… why isn’t any of this documented? This was a particularly annoying issue to work my way around, has obviously affected others, and yet is obscure enough that almost no mention of it can be found in documentation anywhere. GHAH!

Why OTP? Why “pure” and not “raw” Erlang?

I’ve been working on my little instructional project for the last few days and today finally got around to putting a very minimal, but working, chat system into the ErlMUD “scaffolding” code. (The commit with original comment is here. Note the date. By the time this post makes its way into Google things will probably be a lot different.)

I commented the commit on GitHub, but felt it was significant enough to reproduce here (lightly edited and linked). The state of the “raw Erlang” ErlMUD codebase as of this commit is significant because it clearly demonstrates the need for many Erlang community conventions, and even more significantly why OTP was written in the first place. Not only does it demonstrate the need for them, the non-trivial nature of the problem being handled has incidentally given rise to some very clear patterns which are already recognizable as proto-OTP usage patterns (without the important detail of having written any behaviors just yet). Here is the commit comment:

Originally chanman had been written to monitor, but not link or trap exits of channel processes [example]. At first glance this appears acceptable, after all the chanman doesn’t have any need to restart channels since they are supposed to die when they hit zero participants, and upon death the participant count winds up being zero.

But this assumes that the chanman itself will never die. This is always a faulty assumption. As a user it might be mildly inconvenient to suddenly be kicked from all channels, but it isn’t unusual for chat services to hiccup and it is easy to re-join whatever died. Resource exhaustion and an inconsistent channel registry are worse. If orphaned channels are left lying about, the output of \list can never match reality, and identically named ones can be created in ways that don’t make sense. Even a trivial chat service with a tiny codebase like this can wind up with system partitions and inconsistent states (oh no!).

All channels crashing with the chanman might suck a little, but letting the server get to a corrupted state is unrecoverable without a restart. That requires taking the game and everything else down with it just because the chat service had a hiccup. This is totally unacceptable. Here we have one of the most important examples of why supervision trees matter: they create a direct chain of command, and enforce a no-orphan policy by annihilation. Notice that I have been writing “managers” not “supervisors” so far. This is to force me to (re)discover the utility of separating the concepts of process supervision and resource management (they are not the same thing, as we will see later).
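
To make that concrete, the difference in raw Erlang boils down to something like this (an illustration only, not the actual ErlMUD code): a monitor lets the manager observe channel deaths, while a link plus trap_exit both notifies the manager and guarantees that if the manager dies its channels die with it instead of being orphaned.

%% Monitor only: the manager hears about dead channels, but channels
%% survive a dead manager and become orphans.
start_monitored(ChannelFun) ->
    {Pid, _Ref} = spawn_monitor(ChannelFun),
    Pid.

%% Link + trap_exit: the manager still hears about dead channels (as
%% {'EXIT', Pid, Reason} messages), and if the manager itself dies every
%% linked channel is taken down with it. No orphans.
start_linked(ChannelFun) ->
    process_flag(trap_exit, true),
    spawn_link(ChannelFun).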

Now that most of the “scaffolding” bits have been written in raw Erlang it is a good time to sit back and check out just how much repetitive code has been popping up all over the place. The repetitions aren’t resulting from some mandatory framework or environment boilerplate — I’m deliberately making an effort to write really “low level” Erlang, so low that there are no system or framework imposed patterns — they are resulting from the basic, natural fact that service workers form constellations of similarly defined processes and supervision trees provide one of the only known ways to guarantee fallback to a known state throughout the entire system without resorting to global restarts.

Another very important thing to notice is how inconsistent my off-the-cuff implementation of several of these patterns has been. Sometimes a loop has a single State variable that wraps the state of a service, sometimes bits are split out, sometimes it was one way to begin with and switched a few commits ago (especially once the argument list grew long enough to annoy me when typing). Some code_change/N functions have flipped back and forth along with this, and that required hand tweaking code that really could have been easier had every loop accepted a single wrapped State (or at least some standard structure that didn’t change every time I added something to the main loop without messing with code_change). Some places I start with a monitor and wind up with a link or vice versa, etc.

While the proper selection of OTP elements is more an art than a science in many cases, having commonly used components of a known utility already grouped together avoids the need for all this dancing about in code to figure out just what I want to do. I suppose the most damning point about all this is that none of the code I’ve been flip-flopping on has been essential to the actual problem I’m trying to solve. I didn’t set out to write a bunch of monitor or link or registry management code. The only message handler I care about is the one that sends a chat message to the right people. Very little of my code has been about solving that particular problem, and instead I consumed a few hours thinking through how I want the system to support itself, and spent very little time actually dealing with the problem I wanted to treat. Of course, writing this sort of thing without the help of any external libraries in any other language or environment I can think of would have been much more difficult, but the commit history today is a very strong case for making an effort to extract the common patterns used and isolate them from the actual problem solving bits.

The final thing to note is something I commented on a few commits ago, which is just how confusing tracing message passage can be when not using module interface functions. The send and receive locations are distant in the code, so checking for where things are sent from and where they are going to is a bit of a trick in the more complex cases (and fortunately none of this has been particularly complex, or I probably would have needed to write interface functions just to get anything done). One of the best things about using interface functions is the ability to glance at them for type information while working on other modules, use tools like Dialyzer (which we won’t get into until we get into “pure Erlang” in v0.2), and easily grep or let Emacs or an IDE find calling sites for you (see the small sketch after the list below). This is nearly impossible with pure ad hoc messaging. Ad hoc messaging is fine when writing a stub or two to test a concept, but anything beyond that starts getting very hard to keep track of, because the locations significant to the message protocol are both scattered about the code (seemingly at random) and can’t be defined by any typing tools.

I think this code proves three things:

  • Raw Erlang is amazingly quick for hacking things together that are more difficult to get right in other languages, even when writing the “robust” bits and scaffolding without the assistance of external libraries or applications. I wrote a robust chat system this afternoon that can be “hot” updated, from scratch, all by hand, with no framework code — that’s sort of amazing. Doing it sucked more than it needed to since I deliberately avoided adhering to most coding standards, but it was still possible and relatively quick. I wouldn’t want to have to maintain this two months from now, though — and that’s the real sticking point if you want to write production code.
  • Code convention recommendations from folks like Joe Armstrong (who actually does a good bit of by-hand, pure Erlang server writing — but is usually rather specific about how he does it), and standard sets of utilities like OTP, exist for an obvious reason. Just look at the mess I’ve created!
  • Deployment clearly requires a better solution than this. We won’t touch on this issue for a while yet, but seriously, how in the hell would you automate deployment of a scattering of files like this?
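
As an aside of my own on the interface-function point above (the names here are hypothetical, not from the ErlMUD tree), the difference is between scattering raw sends around the codebase and having a single, spec-able call site that Dialyzer and grep can find:

%% Ad hoc: senders scattered anywhere in the code, invisible to Dialyzer.
%%     ChanPid ! {chat, self(), Message}
%%
%% Interface function: one grep-able, spec-able location for the protocol.
-spec chat(pid(), string()) -> ok.
chat(ChanPid, Message) ->
    ChanPid ! {chat, self(), Message},
    ok.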

The economy of good intentions

This is a story about four people: Everybody, Somebody, Anybody and Nobody.
There was an important job to be done and Everybody was asked to do it.
Everybody was sure that Somebody would do it.
Anybody could have done it, but Nobody did.
Somebody got angry because it was Everybody’s job.
Everybody knew that Anybody could do it, but Nobody realised that Somebody wouldn’t do it.
And Everybody blamed Somebody when Nobody did what Anybody could have done.