Erlangers! USE LABELS! (aka “Stop Writing Punched-in-the-Face Code Blocks”)

Do you write lambdas directly inline in the argument list of various list functions or list comprehensions? Do you ever do it even though the fun itself, or the other arguments or return assignment/assertion for the call are too long and force you to scrunch that lambda’s definition up into an inline-multiline ball of wild shit? YOU DO? WTF?!?!? AHHHH!

First off, realize this makes you look like a douchebag for not being polite to other people or your future self whenever you do it. There is a big difference, for the human reading it, between:

%%% From shitty_inline.erl

do_whatever(Keys, SomeParameter) ->
    lists:foreach(fun(K) -> case external_lookup(K) of
                  {ok, V} -> do_side_effecty_thing(V, SomeParameter);
                  {error, R} -> report_some_failure(R)
                end
          end, Keys
    ).

and

%%% From shitty_listcomp.erl

do_whatever(Keys, SomeParameter) ->
    [fun(K) -> case external_lookup(K) of
        {ok, V} -> do_side_effecty_thing(V, SomeParameter);
        {error, R} -> report_some_failure(R) end end(Key) || Key <- Keys],
    ok.

and

%%% From less_shitty_listcomp.erl

do_whatever(Keys, SomeParameter) ->
    ExecIfFound = fun(K) -> case external_lookup(K) of
            {ok, V} -> do_side_effecty_thing(V, SomeParameter);
            {error, R} -> report_some_failure(R)
        end
    end,
    [ExecIfFound(Key) || Key <- Keys],
    ok.

and

%%% From labeled_lambda.erl

do_whatever(Keys, SomeParameter) ->
    ExecIfFound =
        fun(Key) ->
            case external_lookup(Key) of
                {ok, Value}     -> do_side_effecty_thing(Value, SomeParameter);
                {error, Reason} -> report_some_failure(Reason)
            end
        end,
    lists:foreach(ExecIfFound, Keys).

and

%%% From isolated_functions.erl

-spec do_whatever(Keys, SomeParameter) -> ok
    when Keys          :: [some_kind_of_key()],
         SomeParameter :: term().

do_whatever(Keys, SomeParameter) ->
    ExecIfFound = fun(Key) -> maybe_do_stuff(Key, SomeParameter) end,
    lists:foreach(ExecIfFound, Keys).

maybe_do_stuff(Key, Param) ->
    case external_lookup(Key) of
        {ok, Value}     -> do_side_effecty_thing(Value, Param);
        {error, Reason} -> report_some_failure(Reason)
    end.

Which versions force your eyes to do less jumping around? How about which version lets you most naturally understand each component of the code independently? Which is more universal? What does code like this translate to after erlc has a go at it?

Are any of these difficult to read? No, of course not. Every version of this is pretty darn basic and common — you need a listy operation but require a closure over some in-scope state to make it work right, so you really do need a lambda instead of being able to look all suave with a fun some_function/1 type thing. So we agree, taken by itself, any version of this is easy to comprehend. But when you are reading through hundreds of these sorts of things at once to understand wtf is going on in a project, while also remembering a bunch of other shit code that is lying around and has side effects, while trying to recall some detail of a standard, while the phone is ringing… things change.

Do I really care which way you do it? In a toy case like this, no. In actual code I have to care about forever and ever — absolutely, yes I do. The fifth version is my definite preference, but the fourth will do just fine also.

(Or even the third, maybe. I tend to dislike the semantic confusion of using a list comprehension as a loop over a list of values purely for its side effects, without returning a value — partly because this is semantically ambiguous, and also because whenever possible I like every expression of my code to be either an assignment or an assertion (so every line should normally have a = on it). In other words, use lists:foreach/2 in these cases, not a list comp. I especially disagree with a listcomp here because the main utility of a list comprehension is normally to achieve a closure over local state, but in this case we are just calling another closure — so semantic fail there, twice.)

But what about my lolspeed?!?

I don’t know, but let’s see. I’ve created five modules, based on the above examples:

  1. shitty_inline.erl
  2. shitty_listcomp.erl
  3. less_shitty_listcomp.erl
  4. labeled_lambda.erl
  5. isolated_functions.erl

These all call the same helpers that do basically nothing important other than having actual side effects when called (they call io:format/2). What we are interested in here is the generated assembler. What is the cost of introducing these labels that help the humans out VS leaving things all messy the way we imagine might be faster for the runtime?

It turns out that just like with using assignments to document your code, there is zero cost to label functions. For example, here is the assembler for shitty_inline.erl side-by-side with labeled_lambda.erl:

Oooh, look. The exact same stuff!

(This is a screenshot, a text file with the contents shown is here: label_example_comparison.txt)

See? All that annoying-to-read inline lambdaness buys you absolutely nothing. You’re not helping the compiler, you’re not helping the runtime, and you are hurting your future self and anyone you want to work with on the same code later. (Note: You can generate precompiler output with erlc -P and erlc -E, and assembler output with erlc -S. Here is the manpage. Play around with it a bit, BEAM and EVM are amazing platforms, wide open for exploration!)
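
If you would rather poke at this from the Erlang shell instead of the command line, compile:file/2 takes the same listing options as atoms. A quick sketch, assuming the example modules above are sitting in the current directory:

1> compile:file(shitty_inline, ['P']).     % writes shitty_inline.P (after preprocessing)
2> compile:file(shitty_inline, ['E']).     % writes shitty_inline.E (after all source transforms)
3> compile:file(shitty_inline, ['S']).     % writes shitty_inline.S (BEAM assembler listing)
4> compile:file(labeled_lambda, ['S']).    % compare this .S against the one above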

So use labels.

As for execution speed… all of these perform basically the same, except for the last one, isolated_functions.erl. Here is the assembler for that one: isolated_functions.S. This outperforms the others, though to a relatively insignificant degree. Of course, it is only an “insignificant degree” until that part of the program is the most critical part of whatever your program does — then even a 10% difference may be a really huge win for you. In those cases it is worth it to refactor to test the speed of different representations against each version of the runtime you happen to be using — and all thoughts on mere style have to take a backseat. But this is never the case for the vast majority of our code.

(I’ve read reports in the past that indicate 99% of our performance bottlenecks tend to reside in less than 1% of our code by line count — but I can’t recall the names of any just now. If you happen to find a reference, let me know so I can update this little parenthetical blurb with some hard references.)

My point here is that breaking every lambda out into a separate named function isn’t always worth it — sometimes an in-place lambda really is more idiomatic and easier to understand simply because you can see everything right there in the same function body. What you don’t want to see is multi-line lambdas squashed into argument lists that make things hard to read and give you the exact same result once compiled as labeling that lambda with a meaningful variable name on another line in the code and then referring to it where it is invoked later.

The most basic Erlang service ⇒ worker pattern

There has been some talk about identifying “Erlang design patterns” or “functional design patterns”. The reason this sort of talk rarely gets very far (just refer to any of the thousands of aborted mailing list and forum threads on the subject) is because generally speaking “design patterns” is a phrase that means “things you have to do all the time that your language provides both no primitives to represent, and no easy way to write a library function behind which to hide an abstract implementation”. OOP itself, being an entire paradigm built around a special syntax for writing dispatching closures, tends to lack a lot of the primitives we want to represent today, and so has a litany of design patterns.

NOTE: This is a discussion of a very basic Erlang implementation pattern, and being very basic it also points out a few places new Erlangers get hung up on, like what context a specific call is made in — because that’s just not obvious if you’re not already familiar with concurrency at the level Erlang does it. If you’re already a wizard, this article probably isn’t for you.

But what about Erlang? Why have so few design patterns (almost none?) emerged here?

The main reason is that what would have been design patterns in Erlang have mostly become either functional abstractions or OTP (“OTP” in this use generally referring to the framework that is shipped with Erlang). This is about as far as the need for patterns has gone in the most general case. (Please note that it very often is possible to write a framework that implements a pattern, though it is very difficult to make such frameworks completely generic.)

But there is one thing the ole’ Outlaw Techno Psychobitch doesn’t do for us, even though quite a few of us have a common need for it, so we have to discover it for ourselves: how to create a very basic arrangement of service processes, supervisors, and workers that spawn workers according to some ongoing global state or node configuration. (Figuring this out is almost like a rite of passage for Erlangers.)

The case I will describe below involves two things:

  • There is some service you want to create that is represented by a named process that manages it and acts as its sole interface.
  • There is some configurable state that is relevant to the service as a whole, should be remembered, and you should not be forced to pass in as arguments every time you call for this work to be done.

For example, let’s say we have an artificial world written in Erlang. Let’s say it’s a game world. Let’s say mob management is abstracted behind a single mob manager service interface. You want to spawn a bunch of monster mobs according to rules such as blahlblahblah… (Who cares? The game system should know the details, right?) So that’s our task: spawning mobs. We need to spawn a bunch of monster mob controller processes, and they (of course) need to be supervised, but we shouldn’t have to know all the details to be able to tell the system to create a mob.

The bestiary is really basic config data that shouldn’t have to be passed in every time you call for a new monster to be spawned. Maybe you want to back up further and not even want to have to specify the type of monster — perhaps the game system itself should know generally what the correct spawn/live percentages are for different types of mobs. Maybe it also knows the best way to deal with positioning to create a playable density, deal with position conflicts, zone conflicts, leveling or phasing influences, and other things. Like I said already: “Who cares?”

Wait, what am I really talking about here? I’m talking about sane defaults, really. Sane defaults that should rule the default case, and in Erlang that generally means some sane options that are comfortably curried away in the lowest-arity calls to whatever the service functions are.  But from whence come these sane defaults? The service state, of course.

So now that we have our scenario in mind, how does this sort of thing tend to work out? As three logical components:

  • The service interface and state keeper, let’s call it a “manager” (typically shortened to “man”)
  • The spawning supervisor (typically shortened to “sup”)
  • The spawned thingies (not shortened at all because it is what it is)

How does that typically look in Erlang? Like three modules in this imaginary-but-typical case:

  • game_mob_man.erl
  • game_mob_sup.erl
  • game_mob.erl

The game_mob_man module represents the Erlang version of a singleton, or at least something very similar in nature: a registered process. So we have a definite point of contact for all requests to create mobs: calling game_mob_man:spawn_mob/0,1,... which is defined as

spawn_mob() ->
    spawn_mob(sane_default()).

spawn_mob(Options) ->
    gen_server:cast(?MODULE, {beget_mob, Options}).


Internally there is the detail of the typical

handle_cast({beget_mob, Options}, State) ->
    ok = beget_mob(Options, State),
    {noreply, State};
%...

and of course, since you should never be putting a bunch of logic or side-effecty stuff directly in your handle_* function clauses, beget_mob/2 is where the work actually occurs. Of course, since we are talking about common patterns, I should point out that there are not always good linguistic parallels like “spawn” ⇒ “beget”, so a very common thing to see is that some_verb/N becomes a message {some_verb, Data}, which becomes a call to an implementation do_some_verb(Data, State):

spawn_mob(Options) ->
    gen_server:cast(?MODULE, {spawn_mob, Options}).

%...

handle_cast({spawn_mob, Options}, State) ->
    ok = do_spawn_mob(Options, State),
    {noreply, State};

% ...

do_spawn_mob(Options, State = #s{stuff = Stuff}) ->
    % Actually do the work in the `do_*` functions down here
    ok.

The important thing to note above is that this is the kind of module whose process is registered under its own name, which is why the call to gen_server:cast/2 is using ?MODULE as the address (and not self(), because remember, interface functions are executed in the context of the caller, not the process defined by the module).
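
For the curious, the startup side of game_mob_man that makes the ?MODULE addressing work is nothing special. A minimal sketch, with a placeholder state record and a made-up load_world_conf/0 standing in for wherever the world configuration really comes from:

%%% From game_mob_man.erl (sketch)

-record(s, {conf = [], stuff = []}).   % placeholder fields, not canon

start_link() ->
    gen_server:start_link({local, ?MODULE}, ?MODULE, none, []).

init(none) ->
    {ok, #s{conf = load_world_conf()}}.

Because the process is registered locally as ?MODULE, the cast in spawn_mob/1 always lands in the manager’s handle_cast/2, no matter which process called the interface function.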

Also, are the some_verb/N ⇒ {some_verb, Data} ⇒ do_some_verb/N names sort of redundant? Yes, indeed they are. But they are totally unambiguous, inherently easy to grep -n, and most importantly they give us breaks in the chain of function calls necessary to implement abstractions like the managed messaging and supervision that underlie OTP magic like the gen_server itself. So don’t begrudge the names, it’s just a convention. Learn the convention so that you write less annoyingly mysterious code; your future self will thank you.

So what does that have to do with spawning workers and all that? Inside do_spawn_mob/N we are going to call another registered process, game_mob_sup. Why not just call game_mob_sup directly? For two reasons:

  1. Defining spawn_mob/N within the supervisor still requires acquisition of world configuration and current game state, and supervisors do not hold that kind of state, so you don’t want data retrieval tasks or evaluation logic to be defined there. Besides, any calls to a supervisor’s public functions are executed in the context of the caller, not the supervisor itself. Don’t forget this. Calling the manager first gives the manager a chance to wrap its call to the supervisor in state and pass the message along — quite natural.
  2. game_mob_sup is just a supervisor, it is not the mob service itself. It can’t be. OTP already dictates what it is, and its role is limited to being a supervisor (and in this particular case of dynamic workers, a simple_one_for_one supervisor at that).
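
To make the first reason concrete, the body of do_spawn_mob/2 might wrap the supervisor call in state roughly like this (apply_world_conf/2 is a hypothetical helper and the conf field is the placeholder from the sketch above — the real shape depends entirely on your game):

do_spawn_mob(Options, #s{conf = Conf}) ->
    MobConf = apply_world_conf(Options, Conf),
    {ok, Pid} = game_mob_sup:spawn_mob(MobConf),
    _Mon = monitor(process, Pid),
    ok.

The monitor means the resulting {'DOWN', ...} message will land in the manager’s handle_info/2, which is where the “decide what to do about dead mobs” logic mentioned below would live.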

So how does game_mob_sup look inside? Something very close to this:

-module(game_mob_sup).
-behavior(supervisor).

-export([spawn_mob/1]).
-export([start_link/0]).
-export([init/1]).

%%% Interface
spawn_mob(Conf) ->
    supervisor:start_child(?MODULE, [Conf]).

%%% Startup
start_link() ->
    supervisor:start_link({local, ?MODULE}, ?MODULE, []).

init([]) ->
    RestartStrategy = {simple_one_for_one, 5, 60},
    Mob = {game_mob,
           {game_mob, start_link, []},
           temporary,
           brutal_kill,
           worker,
           [game_mob]},
    Children = [Mob],
    {ok, {RestartStrategy, Children}}.

(Is it really necessary to define these things as variables in init/1? No. Is it really necessary to break the tuple assigned to Mob vertically into lines and align everything all pretty like that? No. Of course not. But it is pretty darn common and therefore very easy to catch all the pieces with your eyes when you first glance at the module. It’s about readability, not being uber l33t and reducing a line count nobody is even aware of that isn’t even relevant to the compiled code.)
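
(As an aside: on newer OTP releases the same settings can also be expressed with the map forms of the supervisor flags and child specs. A rough equivalent of the init/1 above, if you prefer that style:)

init([]) ->
    SupFlags = #{strategy  => simple_one_for_one,
                 intensity => 5,
                 period    => 60},
    Mob = #{id       => game_mob,
            start    => {game_mob, start_link, []},
            restart  => temporary,
            shutdown => brutal_kill,
            type     => worker,
            modules  => [game_mob]},
    {ok, {SupFlags, [Mob]}}.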

See what’s going on in there? Almost nothing. That’s what. The interesting part to note is that very little config data is going into the supervisor at all, with the exception of how supervision is set to work. These are mobs: if they crash they shouldn’t come back to life, better to leave them dead and signal whatever keeps account of them so it can decide what to do (the game_mob_man, for example, which would probably be monitoring these). Setting them as permanent workers can easily (and hilariously) result in a phenomenon called “highly available mini bosses” — where a crash in the “at death cleanup” routine or the mistake of having the mob’s process retire with an exit status other than 'normal' causes it to just keep coming back to life right there, in its initial configuration (i.e. full health, full weapons, full mana, etc.).

But what stands above this? Who supervises the supervisor?

Generally speaking, a component like mob monsters would be a part of a larger concept of world objects, so whatever the world object “service” concept is would sit above mobs, and mobs would be one component of world entities in general.

To sum up, here is a craptastic diagram:

Yes, my games involve wildlife and blonde nurses.

The diagram above shows solid lines for spawn_link, and dashed lines to indicate the direction of requests for things like spawn_link. The diagram does not show anything else. So monitors, messages, etc. are all just not there. Imagine them. Or don’t. That’s not the point of this post.

“But wait, I see what you did there… you made a bigger diagram and cut a bunch of stuff out!”

Yep. I did that. I made an even huger, much crappier, more inaccurate diagram because I wasn’t sure at first where I wanted to fit this into my imaginary game system.

And then I got carried away and diagrammed a lot more of the supervision tree.

And then I thought “Meh, screw it, I’ll just push this up to a rough imagining of what it might look like pushed all the way back to the SuperSup”.

Here is the result of that digression:

It wouldn’t look exactly like this, so use your imagination.

ALL. THAT. SUPERVISION.

Yep. All that. Right there. That’s why it’s called a “supervision tree” instead of a “supervision list”. Any place in there where you don’t have a dependency between parts, a thing can crash all by itself and not bring down the system. Consider this: the entire game can fail and chat will still work, users will still be logged in, etc. Not nearly as big a deal to restart just that one part. But what about ItemReg? Well, if that fails, we should probably squash the entire item system (I’ve got guns, but no bullets! or whatever) because game items are critical data. Are they really critical data? No. But they become critical because gamers are much more willing to accept a server interruption than they are losing items and having bad item data stored.

And with that, I’m out! Hopefully I was able to express a tiny little bit about one way supervision can be coupled with workers in the context of an ongoing, configured service that lives within a larger Erlang system and requires on-the-fly spawning of supervised workers.

(Before any of you smarties who have been around a while point out how I glossed over a few things, or how spawning a million items as processes might not be the best idea… I know. That’s not the point of this post, and the “right approach” is entirely context dependent anyway. But constructive criticism is, as always, most welcome.)

Hazards of the Windows yen-mark backslash

The fact that Windows fonts still default to displaying backslashes as yen-marks has been a perennial annoyance for me. A conversation about it today provided a wonderful illustration of just how irritating this can be.

Here is what I saw:

yenmark1

Here is what someone else saw:

yenmark2

This is just a humorous example of technology gone stupid, but it can be a very real disaster in source code (escapes are suddenly uncertain) and quite a few pieces of small business software (and even modern websites right now) have ridiculous output problems where a price is listed as “\50,000” — which isn’t such a big deal until you have a grid with invisible borders and see something that suddenly looks like a pricing equation instead of a statement of price: “500,000pc \ 50,000” Oops!

KILL THIS WITH FIRE WHEREVER YOU FIND IT.

zUUID: An Example Erlang/OTP Project

I was talking with a friend of mine yesterday about how UUID v2 seems to have evaporated. We looked into things further and found it’s not actually included in RFC 4122! One thing led to another and I wound up writing an example project that is yet another UUID generator/utility in Erlang — but this time it actually has duplicate v1 and v2 detection/correction and implements something as close as I can find to what is defined as UUID version 2 values.

As there are already plenty of UUID projects around I focused on making this one as readable as I possibly could — to include exported documentation, in-source documentation, obvious variable names, full typespecs, my silly little “pure” notation, blatantly obvious bitstring syntax, and the obligatory github presence.
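
To give a taste of what I mean by blatantly obvious bitstring syntax, here is the flavor of thing I’m talking about — a throwaway version 4 generator written for this post, not code lifted from the project itself:

%% Version 4 (random) UUID: 122 random bits plus a 4-bit version field
%% and a 2-bit variant field, laid out with plain bit syntax.
v4() ->
    <<A:48, _:4, B:12, _:2, C:62>> = crypto:strong_rand_bytes(16),
    <<A:48, 4:4, B:12, 2:2, C:62>>.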

Hopefully some folks newish to Erlang will come along and explain to me what confuses them about that code, the process of writing it, the documentation conventions, etc. so that I can become a better literate programmer. Of course, since the last thing the world needs is another UUID implementation I suppose I would have had better luck with something at least peripherally related to the web. (>.<)

Messaging: What Will It Do?

[Part 2 of a short series on messaging systems. (Part 1)]

Having implemented messaging systems of various sizes and scopes in all sorts of environments, I’ve come up with a few guidelines for myself:

  1. If messaging is not the core service, make it an orthogonal network service.
  2. If possible make the messages ephemeral.
  3. If the messages must persist use the lightest storage solution possible and store as little as possible.
  4. Accept that huge message traffic will mean partitioning, partitions will be eventually consistent, and this is OK.
  5. You don’t need full text search.
  6. If you really do need full text search then use a DB that is built for this — it’s a major time-sink to hack it in later and get it right.
  7. If the messages are threaded annotations over other existing relational data, swallow your pride and consult your DBA.
  8. VERSION. Your. DATA. And. PROTOCOLS. (A concrete sketch of what this can look like follows this list.)
  9. If anything about messages over existing data records feels hacky, awkward, or like it might put pressure on the existing DB, separate message storage and accept that some data integrity may be delayed or lost from time to time.
  10. Messaging is likely more important to your users than you (or they) think it is.
  11. The messages themselves are likely less important to your users than you (or they) think they are.
  12. If you can skip a feature, DO.
  13. YAGNI.
  14. You don’t need [AdvancedMessagingProtocol] (aka XMPP).
  15. If you *really* need XMPP it will be painfully obvious (and if you do chances are you certainly don’t need the extensions).
  16. [insert more things about avoiding feature creep]
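
To put a little meat on point 8 above: versioning can be as simple as making the version part of the stored or transmitted term itself and writing the lift-to-current-version function the moment a second version exists. A sketch, with made-up fields:

-record(message_v1, {id, from, to, body}).
-record(message_v2, {id, from, to, body, thread = none}).

%% Lift old records to the current shape at the storage/wire boundary,
%% so the rest of the system only ever handles one version.
upgrade(M = #message_v2{}) ->
    M;
upgrade(#message_v1{id = I, from = F, to = T, body = B}) ->
    #message_v2{id = I, from = F, to = T, body = B}.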

Adding messaging to an existing system can get a little messy if you’re not really sure why you are doing it. If you know that your users really do have a need for messaging within your system, but you don’t know what features or type of messaging it should be, then think carefully about how you would want to use it as a user yourself.

Do users work together within your system to accomplish immediate goals? A concurrent multiuser system (multiplayer game, concurrent design tool, pair programming environment, etc.) benefits most from an ephemeral, instant chat system that centers around the immediate task. When the task is over the messages become meaningless and should be allowed to decay. It may be nice to give users an easy way to export or save significant messages, but that is really a client-side issue and is orthogonal to how the messaging system itself works.

Is the system used to coordinate real-world tasks or events? A real-world coordination system benefits most from a threaded comment/discussion system that centers around those tasks, and must only enhance but not replace the existing task-assignment and tracking features of the system. Threaded annotation is powerful here, but the threads need only persist as long as the task records do and messaging should never be mistaken for a task assignment tool (it can be leveraged as a part of task notification but never assignment or tracking). Remember ON DELETE CASCADE? This is where it is super helpful.

Do users self-organize into groups whose sole purpose is communication? Social group systems benefit most from mail systems implemented directly within the system they use to organize themselves. Such systems may also benefit from some form of ephemeral immediate chat, but it is critical that we keep in mind that that indicates a need for two different messaging systems, not One Message System To Rule Them All.

Many different flavors of message systems exist between the extremes of “persistent point-to-point mail” and “ephemeral, instant, channelized (group) chat”. Consider:

  • Persistent chat (Campfire, SocialObstructionist StackOverflow chat, etc.)
  • Message boards (“forums” — though this term is far broader in actual meaning…)
  • Usenet-style newsgroups
  • Mailing list systems
  • Email bridges
  • IRC + bridges + bots
  • Anything else you can imagine…

Some other things to think about are the nature of the users’ relationships to one another. That’s not just about communication channels and point-to-point delivery issues; it is also about what concept of identity exists within the system. Is the system one with strong authentication, total or partial anonymity, or a hybrid? This will dictate everything about your approach to permissions — from moderation, channel creation and administrative control to whether private messages are permissible — and it will have a huge impact on what the implementation of a messaging system will require in terms of access to the original host system it is intended to support.

The issues of identity, authentication and public visibility are largely orthogonal to questions of persistence duration, storage and message-to-record relationship, but they can become intertwined issues without you realizing it whenever it comes time to design the storage schema, the serialization format(s), or the protocol(s). Of course these are three flavors of basically the same issue — though the modern trend is to shy away from thinking about this, either by hiding behind HTTP (like, uh, who even knows how to program sockets anymore? zomg!) or by sticking your fingers in your ears when someone says that “schemaless JSON is the schema” or that XML can do the job of all three because it is the pinnacle of data representations. Consider whether this may change in the future. Keep YAGNI in mind, but when it comes to schemas, serialization and protocols it is always good to design something that can be extended without requiring core modification.

Messaging: Why Would You Do This?

[Part 1 of a short series on messaging systems. (Part 2)]

Messaging is a feature that seems to wind up on the ZOMG MUST HAVE!!1! feature list long after initial system deployment quite a bit these days. I’ve been getting asked about this a lot lately, so I’ve written down my generic advice here for reference. (If you want more specific advice feel free to contact me, though — it’s interesting and fun to hear about all the different things people are working on.)

I’ve been implementing and deploying messaging systems in one form or another since I first got involved with computers as a kid in the very late 80’s and early 90’s. Back then most multiplayer games were basically text message dispatch and routing systems with thematic computation surrounding those messages, so this was an area quite a few people my age dealt with — without realizing that it would one day be considered an “enterprise” feature. (And to think, we used to do this in Pascal, and assembler, and C, and hilariously inadequate variants of BASIC… as children! That nonsense was only reasonable to us because they were the only tools we had, those were labors of love, and we didn’t know any better.)

The most important thing to recognize about messaging systems is that the term “messaging” is hopelessly broad. It doesn’t really mean anything by itself. Every over-the-wire protocol is a message system definition, for example. Every drop-it-in-a-spool system is also a messaging system. Every combat notification system in a game is a messaging system. Every websocket thing you’ve ever done is a messaging system. This list could go on for quite a while, but I assume you get the idea.

Messaging systems take on different characteristics depending on their context of use. Some messaging systems are persistent, some are ephemeral, some have selective decay, some are instant, some are asynch, some are channeled, some are global, some are private, some are selective-access, some are moderated, some include extra semantic data that can be interpreted in the message body, some run everything through a central data service (“ooooh, ahhhh, ‘the cloud'”), some are peered, some are anonymous, some are verified, some broadcast, some are point to point, some are free, some are paid in per-message increments, some are paid by aggregate use, etc. The list keeps going as long as you can think of things that can be said about any system, actually.

It is important to keep in mind that most of the adjectives in the last paragraph are not mutually exclusive. Here is the fun part, though: if you need two that are, then you need two messaging systems. (That’s why they make chocolate and vanilla, after all.)

That’s a lot of different aspects to something as conceptually simple as “messaging”. The obvious problem there is that messaging is actually not simple at all in practice. When you decide that you need a “messaging solution” you really must carefully consider what that means for your users and your system. Your users’ state of mind, user experience, utility value of messaging and desire to use messaging in the context of their use of your system are all folded together.

Consider this: Email is a messaging system and we’ve all already got that, why is this not sufficient for your use? If you can’t answer that question quickly then sit back and let it stew for a bit — you will learn something about your users in the process of answering this question because it forces you to think for a moment as a user of your system instead of as a builder of it.

Let’s consider some reasons you might want to add a killer messaging feature to your existing product or system.

  • You can’t figure out the difference between “organic pageviews” and traffic resulting from the robot invasion.
  • You want more pageviews and you don’t care how you get them — messaging, you think, can be a source of bajillions of clicks!
  • Your users deal with data objects about which they constantly converse, but can’t do so within your system.
  • Your users have things to express that are best represented in a way that captures the history of their thoughts in sequence.
  • Your native applications do exactly what they should, but breaking out of it to send a message is distracting.
  • Your system already passes data among users, but requires human annotation to be fully meaningful.
  • Your system deals in machine-to-machine messages already, but users still have to call each other on the phone.
  • You are looking for a way to attach your brand to one more thing users see every day to reinforce the hypnotic effect you wish your brand had on them.
  • You require an anonymous(ish) way for users to communicate out-of-band relative to other online services.

A conditional flow would look something like this:

stupid_reasons = [pageviews,
                  domain_valuation,
                  brand_enhancement,
                  pagerank,
                  BULLSHIT_WEB_IDEAS(SEO && ALL_FLAGS),
                  ADS([ad_search, ad_rank, ad_tech, ad_{*}, ...]),
                  ...];

if (member(your_reason, stupid_reasons)) {
    if (idea_of(You) || you_are(TheManagement)) {
        admit(being_wrong);
        assert cancer_killing_internet = You;
        abort(stupid_idea);
        quick_grieve(dying_product);
    } else if (idea_of(TheManagement)) {
        abort(current_job, GRACEFULLY);
        // It won't last much longer anyway.
    }
    acquire(funding_contacts);
    contact(me, IRL);
    // I've got way better ideas.
    // You should work with me instead of sticking with losers.
} else if (knee_jerk(messaging_feature)) {
    abort(stupid_idea);
    focus(core_features);
} else {
    congratulate(You);
    panic(FEATURE_BURDEN);
    CAREFULLY(implement());
}

quick_grieve(hopeless_thing)
{
    shock(hopeless_thing, DEAD);
    anger(cant_coerce([model, users, investors, world]),
          TANTRUM_LEVEL(11));
    bargain([me, you, imaginary_entities, investors, users, world, HAL],
            perceived_value(hopeless_thing, PERSPECTIVE(MYOPIC)));
    mourn(hopeless_thing);
    accept(DEFEAT);

    return AGGRESSIVE_OUTLOOK([POSITIVE && EXPERIENCED]);
}

Did you find yourself at congratulations(), or somewhere else? Stop and think for a while if your reason belongs in the stupid_reasons list or not. How much will it actually enhance your users’ experience with the product or interaction with each other or you? Are you going to use it yourself, or are you going to continue to use email, IRC, whatever corporate chat service is popular this week, Twitter, etc? (And if you say “Well, I/we don’t use my/our product, so I don’t know…” then you are already rather far beyond admitting defeat, you just don’t know it yet.)

Be painfully honest with yourself here because messaging is one of those things that you can’t add in and then just take away later without consequences. Even if very few people use it and it turns out to be a technical burden to you someone will wind up using it no matter how shitty it turns out to actually be, a few of these people will come to depend on it, and to these people you will be an asshole if you ever remove it. Don’t be an asshole to customers. That’s how you wind up with a “[your_product]sucks.com” — and the kind of users who come to depend on weird internal messaging systems that suck are exactly the kind of people who will register a domain like that and talk shit about you. That crap will stay on the web forever and the stink may stay on your product forever, even if you change your product and they come back loving it.

So let’s assume that your system will exist in a context where it really does add value to the user’s life in some way. goto congratulations()! Messaging will be a big win if done properly and unobtrusively. As a reward for being diligent and thoughtful you now get to wade through a swamp of design issues. But don’t worry, I’ll give you the fanboat tour and point the way through the muck.

Next we’ll look at some common features of messaging systems, what they mean for your implementation and most importantly what they mean for your users within the context of your system. [Continue to Part 2]

Indirect Influence

I just realized that it is futile to drop hints to my wife about, say, a snack before dinner. The master play is to talk about snacks with my kids before dinner, and they will always find a way to deliver the goods.


Evidence of real power: badass snacks.

Pure Declarations in Erlang

Over the last year or so I’ve gone back and forth in my mind and in discussions with other Erlangers about type systems in Erlang, or rather, I’ve been going back and forth about its lack of one and the way Dialyzer acts as our bandaid in this area. Types are useful enough that we need Dialyzer, but the pursuit of functional puritanism gets insane enough that it’s simply not worth it in a language intended for real-world production use, especially in the messy, massively concurrent, let-it-crash, side-effecty, message-centric world of Erlang.

But… types and pure functions are still really useful and setting a goal of making as much of a program as possible into provable, bounded, typed, pure functions tends to result in easy to understand, test and maintain code. So there is obviously some stress here.

What I would like to do is add a semantic that the compiler (or Dialyzer, but would prefer this be a compiler check, tbh) be aware of what functions are pure and which are not. The way I would do this is by using a different “arrow”, in particular the Prolog-style declaration indicator: :-

[Edit after further discussion…] What I would like to do is add a directive that Dialyzer can interpret according to a simple purity rule. Adding this to Dialyzer makes more sense than putting it in the compiler — Dialyzer is already concerned with checking; the compiler is already concerned with compiling.

The directive would be -pure(Name/Arity) (a complement to -spec). The rule would be very simple: only guard-permissible BIFs and other pure functions are legal from within the body of a pure function. This is basically just an extension of the current guard rule (actually, I wonder why this version isn’t already the guard rule… other than the fact that unless something like this is implemented the compiler itself wouldn’t have any way of checking for purity, so currently it must blindly accept a handful of BIFs known to be pure and nothing else).

For example, here is a pure function in Erlang, but neither the compiler nor Dialyzer can currently know this:

-spec increment(integer()) -> integer().
increment(A) ->
    A + 1.

Here is the same function declared to be pure:

-pure(increment/1).
-spec increment(integer()) -> integer().
increment(A) ->
    A + 1.

Pretty simple change.
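
And to be clear about what the rule would reject, something like this would fail the check, because io:format/2 is neither a guard-permissible BIF nor a pure function (again, this directive is a proposal, not something the tools understand today):

-pure(increment_and_log/1).
-spec increment_and_log(integer()) -> integer().
increment_and_log(A) ->
    ok = io:format("Incrementing ~p~n", [A]),    % side effect: not legal in a pure function
    A + 1.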

“ZOMG! The whole standard library!” And yes, this is true — the whole thing is out. Except that the most important bits of it (the data structures like lists, dict, maps, etc.) could be easily converted to pure functions with little more than adding a single line to each definition.

Any pure function could be strongly typed and Dialyzer could adhere to strong types instead of looser “success types” in these cases. Some code that is currently written to take an input from a side-effecty function, pass it through a chain of non-returning and possibly side-effecty functions as a way to process or act on the value, and ultimately then call some side-effecty final output function would instead change to a form where the side-effects are limited to a single function that does both the input and output, and all the processing in-between would be done in pure functions.

This makes code inherently more testable. In the first case any test of the code is essentially an integration test — as to really know how things will work requires knowing at least one step into side effects (and very often we litter our code with side-effects without a second thought, something prayer-style monadisms assist greatly with). In the second case, though, the majority of the program is pure and independently testable, with no passthrough chain of values that have to be checked. I would argue that in many cases such passthrough is either totally unnecessary, or when it really is beneficial passing through in functions is not as useful as passing through in processes — that is to say, that when transformational passthrough is desired it is easier to reason about an Erlang program as a series of signal transformations over a message stream than a chain of arbitrarily side-effecty function calls that collectively make a recursive tail-call (and that’s a whole different ball of wax, totally orthogonal to the issue of functional purity).
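
As a tiny sketch of that second form (pretending for the moment that the stdlib calls involved already carried their own purity declarations), the side effects live in one edge function and everything in the middle is pure and independently testable:

%% The only side effects are reading the file and printing the report.
report_total(Filename) ->
    {ok, Bin} = file:read_file(Filename),
    io:format("Total: ~p~n", [total(numbers(Bin))]).

%% Everything below is pure and could carry the proposed declaration.
-pure(numbers/1).
numbers(Bin) ->
    [binary_to_integer(B) || B <- binary:split(Bin, <<"\n">>, [global, trim])].

-pure(total/1).
total(Ns) ->
    lists:sum(Ns).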

Consider what we can know about a basic receive loop:

loop(State) ->
  receive
    {process, Data} ->
        {ok, NewState} = do_process(Data, State),
        loop(NewState);
    {send_state, From} ->
        From ! State,
        loop(State);
    halt ->
        exit(normal);
    Message ->
        ok = log(unexpected, Message),
        loop(State)
  end.

-spec do_process(term(), #state{}) -> {ok, #state{}} | {error, term()}.
do_process(Data, State) :-
    % Do purely functional stuff
    Result.

-spec log(category(), term()) -> ok.
log(Cat, Data) ->
    % Do side-effecty stuff
    ok.

We can see exactly what cases result in another iteration and which don’t. Compare that with this:

loop(State) ->
  receive
    {process, Data}     -> do_process(Data, State);
    {send_state, Asker} -> tell(Asker, State);
    quit                -> exit(normal);
    Message             -> handle_unexpected(Message, State)
  end.

do_process(Data, State) ->
    % Do stuff.
    % Mutually recursive tail call; no return type.
    loop(NewState).

tell(Asker, State) ->
    % Do stuff; another tail call...
    loop(State).

handle_unexpected(Message, State) ->
    ok = log(unexpected, Message),
    % Do whatever else; end with tail call to loop/1...
    loop(NewState).

I like the way the code lines up visually in the last version of loop/1, sure, but I can’t know nearly as much about it as a process. Both styles are common, but the former lends itself to readability and testing while the latter is a real mixed bag. Pure function declarations would keep us conscious of what we are doing and commit our minds to the definite-return form of code, where our pure functions and our side-effecty ones are clearly separated. Of course, anyone could also continue to write Erlang any old way they used to — this would just be one more tool to assist with breaking complexity down and adding some compile-time checking in large systems.

I would love to see this sort of thing happen within Erlang eventually, but I am pretty certain that it’s the sort of change that won’t happen if I don’t roll up my sleeves and do it myself. We’ve got bigger fish to fry, in my opinion (and I’ve certainly got higher priorities personally right now!), but perhaps someday…