Speaking of police states, whatever happened to StackExchange?

StackOverflow, StackExchange, etc. and their whole community rather sadden me these days. It seems the assburgers who dump their angst on Wikipedia have found a new home making StackExchange sites as unfun as possible. The situation isn’t helped at all by the recent rabbit-like proliferation of SE sites which exhibit so much subject overlap as to make determining the “proper” place of most posts nearly impossible — all that has done is given the Post Police a reason, in every case, to force re-posting, closing, pausing, deletion, or other official vandalism of valid posts and answers.

Let’s say I have a question about travelling to Austria to see historical sites. Let’s put it in travel. But if the question is phrased in such a way as to reference a specific historical event, like, say, a little battle in which a Polish king demonstrated how much he appreciated the Ottoman visitors, then it is an almost certain thing that some twerp who knows nothing about history or travelling in Austria will promptly vote to close or move the question on the grounds that it belongs on the SE site for history and not travel. The post will face the same fate once it arrives there, of course, and the asker will be helpless.

This sort of thing goes on all the time between StackOverflow, “Programmers”, and “Workplace”, as many of the questions professional programmers ask cut sharply across all three lines.

Another annoyance is unnecessary Post Police comments like the one left on this post. Sure, I was being silly in the way I did it, but my response represents the general consensus of the Erlang community on this particular subject and is, in essence, a fairly standard response. Obviously not good enough for 19-year-old Alexy Schmalko, Protector of the Realm.

Whatever happened to the sense of community I used to get from, you know, the community? Did that all die when Usenet got retarded in the early ’90s? Did it evaporate with the coming of the cool web kids? I’ve probably got kids somewhere near his age…

The Lightweight Nature of Erlang Processes

Understanding the difference between Erlang processes and OS processes can be a bit confusing at first, partly because the term “process” means something different in each case, and partly because the semantics of programming terms have become polluted by marketing, political and religious wars. A post to the Erlang questions mailing list asking why Erlang processes are so fast and OS processes are so slow reminded me of this today.

Erlang processes are more similar to the “objects” found in most OOP languages than the “processes” managed by an OS kernel, but have a proper message passing semantics added on in a way that abstracts the OS network, pipe and socket mechanisms. We wouldn’t be surprised if the Python runtime handled its objects with less overhead than the OS kernel handles a process, of course, and it should come as no surprise that the Erlang runtime handles its processes with less overhead than the OS kernel. After all, a Python “object” and an Erlang “process” are very nearly the same thing underneath.

Most OOP runtimes implement “objects” as a special syntactical form of a higher order function, one that forms a closure around its state, includes pointers to methods as a part of that state (usually with their own special syntax that abstracts the difference between a label, a pointer and a variable) and returns a dispatch function which manages access to its internal methods. Once you get down to assembly, this is the only way things work anyhow (and on von Neumann architectures there is exactly zero difference among data, pointers to data, instructions and pointers to the next instruction). If you strip that special syntax away there is no practical difference between directly writing a higher order function that does this and using the special class definition syntax.

Even in a higher-level language the higher-order functional nature of an “object”’s class definition can be illustrated. For example, the following Python class and function definitions are equivalent.

class Point():
    def __init__(self, x=0, y=0):
        self.x = x
        self.y = y

    def set_x(self, x):
        self.x = x

    def set_y(self, y):
        self.y = y

    def get_x(self):
        return self.x

    def get_y(self):
        return self.y

def gen_point(x=0, y=0):
    coords = {"x": x, "y": y}

    def set_x(x):
        coords["x"] = x

    def set_y(y):
        coords["y"] = y

    def get_x():
        return coords["x"]

    def get_y():
        return coords["y"]

    def dispatch(message, value=0):
        if message == "set x":
            set_x(value)
        elif message == "set y":
            set_y(value)
        elif message == "get x":
            return get_x()
        elif message == "get y":
            return get_y()
        else:
            return "Bad message"

    return dispatch

We would be utterly unsurprised that both the class definition and the function definition return entities that are lighter weight than OS processes. This is not so far from being the difference between Erlang processes and OS processes.

Of course, the above code is ridiculous to do in Python either way. The whole point of the language is to let you avoid dealing with this exact sort of code. Also, Python has certain scoping rules which are designed to minimize the confusion surrounding variable masking in dynamic languages — and the use of a dictionary to hold the (X, Y) state is a hack to get around this. (A more complete example that uses explicit returns and reassignment is available here.)
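For what it’s worth, Python 3’s nonlocal makes the dictionary hack unnecessary. Here is a sketch of my own (not the code in the linked file) of how a rebinding version might look:

```python
# A sketch using Python 3's nonlocal instead of the dictionary hack.
# This is my own illustration, not the linked file's code.
def make_point(x=0, y=0):
    def dispatch(message, value=0):
        # nonlocal rebinds the names closed over from make_point's scope,
        # so no mutable container is needed to hold the state.
        nonlocal x, y
        if message == "set x":
            x = value
        elif message == "set y":
            y = value
        elif message == "get x":
            return x
        elif message == "get y":
            return y
        else:
            return "Bad message"
    return dispatch

p = make_point(1, 2)
p("set x", 10)
print(p("get x"))  # 10
print(p("get y"))  # 2
```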

For a more direct example, consider how this can be done in Guile/Scheme:

(define (point x y)
  (define (setter coord value)
    (cond ((eq? coord 'x) (set! x value))
          ((eq? coord 'y) (set! y value))))
  (define (getter coord)
    (cond ((eq? coord 'x) x)
          ((eq? coord 'y) y)))
  (define (dispatch m)
    (cond ((eq? m 'set) setter)
          ((eq? m 'get) getter)
          (else (error "point: Unknown request"))))
  dispatch)

OOP packages for Lisps wrap this technique in a way that abstracts away the boilerplate and makes it less messy, but it’s the same idea. This can be done in assembler or C directly as well. Equivalent examples are a bit longer, so you’ll have to take my word for it. (A commented version of the Guile example above can be found here.)

While OOP languages typically focus on access to state and access to methods as state, Erlang focuses like a laser on the idea of message passing. Easy, universal access to state in OOP languages makes it natural to do things like share state, usually by doing something innocent like declaring a name in an internal scope that points to an independent object from somewhere outside.

Erlang forbids this, and forces all data to either be a part of the definitions that describe the process (things declared in functions or their arguments), or go through messages. Combined with recursive returns and assignment in a fresh scope (akin to the last Python example in the extra code file) this means state is effectively mutable and side effects can occur without violating single assignment, but that everything that changes must change in an explicit way.
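The recursive-state idea can be sketched in Python, with the caveat that this is illustration only (Python has no tail-call elimination, so a real long-lived process loop cannot be written this way): the “process” is a function whose entire state lives in its arguments, and every change is a message that produces a recursive call with fresh bindings.

```python
# Illustrative sketch of an Erlang-style process loop: state is never
# mutated, only passed forward to the next recursive call with a new binding.
# (Python lacks tail-call elimination, so this is a toy, not a pattern to use.)
def loop(state, mailbox):
    if not mailbox:
        return state                      # process "exits", yielding final state
    msg, rest = mailbox[0], mailbox[1:]
    tag, value = msg
    if tag == "add":
        return loop(state + value, rest)  # "changed" state is a fresh binding
    elif tag == "reset":
        return loop(0, rest)
    else:
        return loop(state, rest)          # unknown messages are ignored

final = loop(0, [("add", 5), ("add", 3), ("reset", None), ("add", 2)])
print(final)  # 2
```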

This restriction comes at the cost of requiring a sophisticated routing and filtering system. Erlang has an unusually complete message concept, going far beyond the “signals and slots” style found in some of the more interesting OOP systems. In fact, Erlang goes so far with the idea that it abstracts messages, filters, a process scheduler and the entire network layer with it. And hence we have a very safe environment for concurrent processing — using “processes” that certainly feel like OS type processes, but are actually named locations Erlang’s runtime keeps track of in the same way an OOP runtime does objects, functions and other declared thingies. They feel like OS processes because of the way Erlang handles access to them, in the same way that Java objects feel like my mother-in-law’s purse because of the way the JVM handles access to them — but underneath they are much more like each other than either is like an OS process.

In the end, all this stuff is just long lines of bits standing in memory. The special thing is the rules we invent for ourselves that tell us how to interpret those bits. Within those rules we have various ways of declaring our semantics, but in the end the lines of bits don’t care if you think of them as “objects”, as “processes”, as “closures”, as “structs with pointers to code and data” or as “lists of lists with their own embedded processing rules”. OSes have particularly heavy sets of rules regarding how bits are accessed and moved around. Runtimes tend not to. Erlang “processes” are of a kind with Python “objects”, so we shouldn’t be surprised that they are significantly lighter weight than the “processes” found in the OS.

Republication of GNU’s Guile 2.0 Manual

I’ve republished the “one page per node” html version of the GNU Guile 2.0 Manual here. I’ve been referencing the manual quite a bit lately and noticed that, at least from my location, the site responds quite slowly at times. So I’m mirroring the manual where I know it’s fast and I can always find it.

I hope that the reason the GNU servers are responding slowly is legitimate load and not something more sinister. Attacks on community sites are one of the more stupid forms of tragedy that these commons experience.

On the Meaning of “Carbon Neutral”

I noticed that a few products in my house have begun to proudly proclaim themselves as being “carbon neutral” over the last few months. Apparently this is among the new set of empty phrases marketing people feel are necessary to distinguish their otherwise ordinary commodity products from identical products of comparable quality. It used to be “Made in U.S.A.” or “日本製” (depending on the neighborhood), then it was “low sodium”, then “waterproof”, then “low fat”, then “low transfat”, then “cholesterol free”, then “omega-3”, then something else I probably forgot.

The problem isn’t that any of these things aren’t potentially good selling points, it’s that they usually don’t apply to the things I see the labels on. For example, I remember seeing an electric wok that said “Made in U.S.A.” on the bottom. I’m not so sure that’s the best thing to concern one’s self with when buying a cooking apparatus that originated in another hemisphere. That’s like buying a tuna steak because the sticker on the package marks it as being “a peanut-free product”, or believing that a piece of software is high quality because it’s written in Java (or, even more uselessly, because it “utilizes Java technology!”).

This reminds me of my sister’s enlightening tale of the truth behind the now heavily regulated terms “organic” and “all natural” as applied to food labels. She did her undergraduate study in genetics and graduate work in nutrition, worked in colon cancer research for a while, started a dietary medicine program at a hospital in Virginia a few years back, and now (after reaching some disillusionment with the state of government-funded research) raises “range fed Angus beef” as a side interest. She is therefore directly governed by some of the more hilarious regulations the FDA has come up with.

Needless to say, her opinion on the value of these buzzwords carries much more weight with me than whatever a “medicinal cannabis expert” has to tell me about the benefits of toking up, or the local yoga girl at the gym has to tell me about the benefits of yogurt shakes or almond oil or peanut-butter enemas or whatever it happens to be this week (of course, she’s just right about the benefits of sex in exciting places). In short, the regulations governing terms such as “organic” and “natural flavor” (or even the way the term “X% fat free” is permitted to be used) are both economically draining to comply with, due to the administrative overhead of regulatory compliance, and yet so full of loopholes that there is really no clear distinction between a head of lettuce that is “organic” and one that isn’t so labeled. Essentially the only difference is the price of the market package.

Of course, the real difference is that the lettuce sporting an “organic” sticker on it is almost undoubtedly produced by a large agribusiness firm that can afford the overhead of doing all the pencil-drills necessary to proclaim their lettuce to be “organic”. Either that, or it is quite pricey lettuce only rich folks who feel the need to spend more to sate their moral thirst can afford, grown at an “organic” farm run by one savvy businessman and a flock of altruist peons bent on saving humanity from itself one vegetable at a time. I’m certainly not saying that large agribusiness is bad — ultimately it’s the only way we’re going to survive over the long term (and here I’m including post-colonization of space) — but that the terms used on packaging are enormously deceptive in nature.

But that’s food. It is a specific example where it is relatively easy to trace both the regulatory documentation and the research literature. Of course, very few people actually track all that down — other than unusual people like my sister who happen to be trained in genetics, directly involved in agriculture, and so habituated to both scientific and regulatory research that they find nothing daunting about navigating the semantic labyrinth the FDA has let agricultural regulation become in the US (and the phrase “let…become” could easily be replaced with “deliberately made of…”). I suppose the problem isn’t that few people track all that down, really; it’s more a problem that even if my sister were to go to the trouble of explaining what she knows to the average consumer they wouldn’t have the faintest clue what she was getting at. The average consumer is instead faced with an essentially religious (or at least dogmatic) choice of whether to trust someone that has a stack of official paper backing up her credibility, or a government agency and a huge food industry which are both populated by thousands of people who each have every bit as much officious documentation backing up their reputations.

And that brings me back to “carbon neutral”. We still chase the purported value of demonstrably empty terms such as “cloud computing”, demonstrably failed vehicles such as “social networking”, and demonstrably flimsy labels such as “organic” and “all natural”. But we don’t stop there. We are jumping head-first onto the “carbon neutral” bandwagon as well. The point isn’t that we shouldn’t be concerned with the terrestrial environment, but rather that we must at all times guard against political forces that constantly seek to invent new social mores and foist them on us by conjuring meaning into empty phrases like “carbon neutral”. It tricks you not just into buying ordinary thing A over ordinary-but-cheaper-thing B, but also into feeling morally superior. In this it is indistinguishable from other dogmatic rhetoric that engenders an unfounded sense of moral certainty. If we thought convincing people that a man in the sky doesn’t want them to fly airplanes into office buildings was hard, consider how much more difficult it is to convince average people who genuinely want to “do good” that reasonablish sciency words are nothing more than unfounded political siren songs trying to open one more door for the tax man.

So back to the reasonablish sciency phrase “carbon neutral”… what does it mean? This is where I have some semantic issues, mainly because nobody really knows.

Let’s say, for example, that we start a paper mill. We’ll make paper, but only from recycled paper and only using wind energy. This could probably qualify as being entirely “carbon neutral”. But so could the same paper mill if it planted its own trees. But what about the wind generators? They have to come from somewhere. What about the diesel-powered trucks that carry the old paper stock to the recycling mill? What about the initial material itself? Are we being carbon neutral if we don’t go replace as many trees as our recycled stock represents? How about the electricity used by the paper-compactors run by other companies we have no control over? What about our employees’ cars they use to get to work? What about all the flatulence they invite by eating pure vegan meals?

The initial production itself would almost certainly not qualify as being “carbon neutral” — which demonstrates that we have to draw an arbitrary line somewhere from which we can derive a meaning for the term “carbon neutral”. It is almost certain that something, whether directly or indirectly, involved an increase in carbon emissions (and the meaning of “direct” and “indirect” really should be their own battlegrounds here, considering what people think the term “carbon neutral” means) somewhere at some point, otherwise there wouldn’t be people to buy our recycled earth-friendly paper to begin with.

But what are “carbon emissions”? This is, apparently, intended to only refer to releasing carbon into the air. Consider for a moment how monumentally arbitrary that is. There are currently some well-intended, but enormously misguided efforts to “sequester” carbon by burying it in the crust of the Earth. This, of course, represents an enormously heavy emission of carbon into the environment, but we are calling this a “good” emission (actually, we refrain from using the word “emission” because we intend that to be a “bad” word) because it is going into the ground and not the air. Incidentally, it is also not going into something useful like diamond-edge tools or nano insulators or any other beneficial process that is desperate for carbon (which our planet happens to be poor in by cosmological standards).

So where did all this “bad” carbon come from? If you believe the press, it’s coming from our SUV exhaust, coal-burning plants, Lady GaGa (well, she might be a Democrat, in which case she can’t be bad), and pretty much anything else that humans use to modify local order at the expense of a probable increase in universal entropy.

Where did the carbon come from for the coal, crude, natural gas and bovine flatulence? Probably from the atmosphere and the sea. At least that’s what a biologist will tell you.

And here is a problem for me. Nobody has explained (not just to my satisfaction, but explained at all) where all the billions of tons of carbon necessary to create the forests that created the coal (and possibly crude oil) came from in the first place.

Well, that’s not quite true. In the first place it came from a long-dead stellar formation, some crumbs of which clumped together to form our solar system. That’s the first place. So the second place. Where did the carbon for all this organic activity come from in the second place? Was it distributed evenly in the early Earth? Has it always been a fixed quantity in the atmosphere? Does it boil out of the molten terrestrial substrate and gradually accumulate in the atmosphere and ocean?

If the forests grew in the first place then the carbon was in the air, at least at one point. If it is a fixed amount of atmospheric carbon then the growth of the forests and their subsequent demise and burial beneath sediment represents an absolutely massive sequestration of atmospheric carbon. If it is indeed a fixed amount, then the absolutely huge amounts of flora and fauna represented by these forests were not prevented from thriving in an atmosphere which contained a huge amount more carbon than the atmosphere contains today. If that is true, then either climate change is not affected much by the carbon content of the atmosphere, or a changed climate does not pose much of a threat to organic life on Earth.

Some parts of the fixed atmospheric quantity scenario don’t really add up. Despite a bit of effort I’ve only managed to scratch the surface of the ice core research literature, but a static amount of available atmospheric carbon doesn’t seem to be the story research efforts so far tell. This area of research has been made amazingly difficult to get a straight tack on ever since environmental sciences got overheated with politically-driven grants looking for results that validate political rhetoric instead of grants seeking research into what is still largely an observational field, but it seems fairly clear that there have been fluctuations in atmospheric carbon content that do not directly coincide with either the timing of ice-ages or the timing of mass terrestrial forestation. (The record is much less clear with regard to life in the ocean — and this could obviously be a key element, but it doesn’t seem that many people are looking there, perhaps because the current rhetoric is full of fear of rising sea levels, not full of hope for a marine component to the puzzle of eternal human salvation). That said, there must be some pretty massive non-human sources of atmospheric carbon which have been in operation millions of years before we evolved (as for where trillions of tons of carbon may have gone, I think the huge coal formations may be an indication).

While the idea of a carbon-rich atmosphere providing adequate conditions for thriving terrestrial life might seem odd (at least when compared with the “Your SUV is killing the Earth!” dialog), the idea that the Earth itself has both mechanisms to gradually ratchet up the total amount of carbon in the atmosphere over the eons and to drastically change the climate in spans measured in mere years (not decades, not centuries or millennia) without human or even atmospheric input is pretty scary.

A lot more scary than the idea that driving my kids to school might be damaging in some small way.

But this isn’t the way we are thinking. We are letting marketers and politicians — two groups infamous for being as destructively self-serving as possible — sell us a new buzzword stew, and we, the consumers, are ready to confidently parrot phrases such as “carbon neutral” about as if they mean something. “Oh, Irene, that salad dressing is 100% organic and carbon neutral — you’re such a gourmet!”

We’re clearly having the wool pulled over our eyes, except this time it doesn’t just play to our ego-maniacal craving to live forever (“If you eat gluten-free yogurt and drink positive-ion water you’ll live forever — and have huge tits/a thicker penis/ungraying hair/a tiny waist!”), it engenders a dangerous sense of moral superiority (“I’m doing this for the planet/global socialism/God/The Great Leader!”) which tends to eliminate all possibility of rational thought on a subject which could indeed affect us all.

What if, for example, the Atlantic currents are just panting their last, barely keeping us away from a global mass cooling event? We won’t just be blind to the threat because we’ve blown our research money on politically driven quests to generate the academic support necessary to pursue whatever new pork-barrel projects we come up with over the next decade or two — we will deny the very idea that a threat other than carbon emissions could possibly exist on moral grounds because we’ve already identified the “real enemy” (wealthy people in SUVs who come with the added benefit of being fun to hate). That’s dangerous.

Words mean things. We should remember that.

Using the alternatives command

A quick demonstration of using alternatives:

[root@taco ~]# cat /opt/foo/this
#! /bin/bash
echo "I am this."
[root@taco ~]# cat /opt/foo/that
#! /bin/bash
echo "I am that."
[root@taco ~]# cat /opt/foo/somethingelse 
#! /bin/bash
echo "I am somethingelse."
[root@taco ~]# which thisorthat
/usr/bin/which: no thisorthat in (/usr/lib64/qt-3.3/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin)
[root@taco ~]# alternatives --install /usr/bin/thisorthat thisorthat /opt/foo/this 1
[root@taco ~]# alternatives --install /usr/bin/thisorthat thisorthat /opt/foo/that 2
[root@taco ~]# which thisorthat
/usr/bin/thisorthat
[root@taco ~]# ls -l /usr/bin/thisorthat 
lrwxrwxrwx. 1 root root 28 Mar 23 10:38 /usr/bin/thisorthat -> /etc/alternatives/thisorthat
[root@taco ~]# ls -l /etc/alternatives/thisorthat 
lrwxrwxrwx. 1 root root 13 Mar 23 10:38 /etc/alternatives/thisorthat -> /opt/foo/that
[root@taco ~]# thisorthat
I am that.
[root@taco ~]# alternatives --set thisorthat /opt/foo/that 
[root@taco ~]# thisorthat
I am that.
[root@taco ~]# alternatives --set thisorthat /opt/foo/somethingelse 
/opt/foo/somethingelse has not been configured as an alternative for thisorthat
[root@taco ~]# alternatives --display thisorthat
thisorthat - status is manual.
 link currently points to /opt/foo/that
/opt/foo/this - priority 1
/opt/foo/that - priority 2
Current `best' version is /opt/foo/that.

Once upon a time, in a time long forgotten, the alternatives command on Fedora-descended systems (RHEL, Scientific Linux, CentOS, etc.) seemed a magical thing. It permitted two versions of the same utility to exist on a system at the same time by automatically switching links from a canonical path to the actual versioned path when necessary. Very cool. This was so cool, in fact, that other system utilities came to rely on it; notably, the post-install section of quite a few RPMs calls alternatives to set up parallel versions of things like language runtimes.

The downside to this automation is that use of alternatives itself seems to have become a lost art (or possibly it’s that the popularity of this distro family has simply diluted the knowledge pool as the user ranks have swelled with newcomers who simply take the automation for granted).

It should be noted that alternatives is a system-wide command, so when root sets an alternative, it affects everyone’s view of the system. It should be further noted that this problem has other, but very similar, solutions on other systems like Gentoo and Debian. Gentoo’s eselect system is a bit more sophisticated and can manage families of alternatives at once (links to a number of disparate language utilities which have to change in arbitrary ways based on which underlying runtime is selected, for example). Fortunately it’s not very difficult to write a wrapper for alternatives that can provide a similar experience, but unfortunately I don’t have the time this morning to get into that here.
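As a rough illustration of what such a wrapper might look like (the family name, tool names and path layout below are all invented for the example, and echo stands in for actually invoking alternatives):

```shell
#! /bin/bash
# Hypothetical sketch of a "family switcher" built on alternatives.
# Everything named here is made up for illustration; echo is used as a
# dry-run so the function prints the commands it *would* issue.
switch_family() {
    local family="$1"; shift
    for tool in "$@"; do
        echo alternatives --set "$tool" "/opt/${family}/bin/${tool}"
    done
}

# Switch every member of an imaginary "erlang-r16" family at once:
switch_family erlang-r16 erl erlc escript
```

Dropping the echo (and running as root) would make it actually flip the links, provided each member had already been registered with alternatives --install.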

Source or Satire?

From time to time I encounter openly discoverable code that is so wild in nature that I can’t help but wonder if the author was writing a machine function or a satirical statement.

Groovy source: ArrayUtil.java

After spending a few days plowing through Java code at the outset of a new Android project I found myself checking around for practical alternatives. In the course of that search (which netted Scala, Groovy and Clojure, in descending order of easy tooling for Android) I stumbled across this gem of a source file in the Groovy codebase. At first I couldn’t really tell if this was a joke about Java’s expressiveness or a functioning bit of code, but then I realized it is actually both — all the more funny because it’s expressing a cumbersome optimization that will execute on the same JVM either way:


Breach: A browser as practical satire

Someone from the Erlang world was kind enough to paste a link to Breach — a browser written in node.js. It’s so full of meta fail and manifests the very essence of hipster circular logic that… I can only assume it is satire in the same vein as INTERCAL.



Update Dell PowerEdge LCD User String Remotely, Without Rebooting, Using IPMI

The other day I noticed an(other) horribly annoying thing about Dell PowerEdge servers: nothing in the iDRAC settings that is useful to change can be changed without either a reboot or at least resetting the DRAC itself. Obviously rebooting is unacceptable for a production server and unnecessarily painful for virtualization hosts, and a reset might totally clobber your ability to contact the DRAC at all if you cannot reach its default address subnet afterward ( or something like that is the default address).

It turns out that all this mess is totally unnecessary if one just ignores the (stupid) web interface to the DRAC and the RACADM interface and instead sends straight IPMI commands with a utility like ipmitool. I found a script written by Tanner Stokes that makes the “user string” setting easy to do on the fly, and abstracts away the mess of getting hex values for each string character (IPMI can do anything, but none of it is straightforward). Here is my update of that script:


#! /usr/bin/env python3
# This script changes the LCD user string on Dell machines that conform to IPMI 2.0

from subprocess import call
from sys import stderr

target_host = input('\nEnter name or IP of target host:\n')
user_string = input('Enter LCD string:\n')

hex_string = ' '.join([hex(ord(z)) for z in user_string])

print('\nTrying to change LCD string on {0}...'.format(target_host))

try:
    retcode = call('/usr/sbin/ipmitool -H {0} -I lan -U root raw 0x6 0x58 193 0 0 {1} {2}'.format(target_host, str(len(user_string)), hex_string), shell=True)
    if retcode == 0:
        print('Success!')
    elif retcode < 0:
        print('Terminated by signal', -retcode, file=stderr)
    else:
        print('Oops! Returned', retcode, file=stderr)
except OSError as e:
    print('Failed with error:', e, file=stderr)

# The following supposedly sets the user string to show on the LCD, but
# is still broken (probably wrong function number) -CRE

# retcode = call('/usr/sbin/ipmitool -H {0} -I lan -U root raw 0x6 0x58 194 0'.format(target_host))

# Changelog:
# Tanner Stokes - tannr.com - 2010-02-26
#  * Original author
# Craig Everett <zxq9@zxq9.com> 2013-12-13
#  * Update to Python3
#  * Pythonification
#  * Cosmetic changes

It would be better if it were wrapped in an optional main(), accepted arguments for the target_host and user_string, and accepted the name of an input file to go through — but I’m not excited enough just now to do that. In any case, thanks, Tanner!
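A sketch of what that improvement might look like (the option names and function shape here are my own invention, not Tanner’s script); the hex-encoding step is factored into a function so it could be reused for a batch file of host/string pairs later:

```python
#! /usr/bin/env python3
# Hypothetical argument-driven version of the LCD string setter.
# Option names and structure are invented for illustration.
import argparse
from subprocess import call

def lcd_command(host, user_string):
    # Build the raw IPMI request that sets the LCD user string (function 193).
    hex_string = ' '.join(hex(ord(c)) for c in user_string)
    return ('/usr/sbin/ipmitool -H {0} -I lan -U root '
            'raw 0x6 0x58 193 0 0 {1} {2}'.format(host, len(user_string), hex_string))

def main():
    parser = argparse.ArgumentParser(description='Set the Dell LCD user string via IPMI')
    parser.add_argument('target_host', help='name or IP of the target DRAC')
    parser.add_argument('user_string', help='string to show on the LCD')
    args = parser.parse_args()
    return call(lcd_command(args.target_host, args.user_string), shell=True)

# main() would be invoked from a __main__ guard in a real script; here we just
# show the command that would be sent for an example host and string.
print(lcd_command('10.0.0.5', 'web01'))
```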

Mental Overhead

I haven’t had any reason to write assembly by hand for quite a while, but the other day a deep hardware geek friend of mine asked for an opinion on an instruction set for an architecture he is working on — and of course that means using his assembly instructions directly.

I had forgotten what that is like. It is common today to hear people who have never written anything in assembly put C in the same category as assembly, having themselves heard that C is “low level”. C is certainly lower level than, say, Python or Erlang, but it’s a far cry from assembly. Saying “low level” isn’t the same thing as saying “hardware level” or “lower than…”. Abstractions are always relative to the level of your problem of the moment.

Perhaps C is metaphorically comparable to assembly if you are programming in userland and your major concerns are stretching background images or including on-click sound effects or whatever. But that doesn’t compare to assembly. What is most interesting about C is that compilers can be written that abstract away the quirks of different hardware instruction sets and make programs magically portable.

Of course, the phrase “portable” is in the same boat as the phrase “low level” these days. If it’s not interpreted or compiled to bytecode (or, in the case of the ultra ignorant, if it’s not Java) then folks who have no experience in compiled languages will think you must be mistaken for using the phrase “portable”. Some more Java-hype driven misconceptions that haven’t yet died out*.

After a long stint in the world of garbage collection and either extremely strong typing or completely dynamic typing it is funny to think of everything as an integer again. Assembly is not a bad way to deal with hardware-level problems (provided the instruction set doesn’t suck). A good set of machine instructions is precisely the sort of thing needed to solve the sort of problems you encounter when dealing with specific device behaviors or, say, bootstrapping a compiler for a simple language like C. An assembly with a decent syntax provides mnemonic devices in a way that makes dealing with those machine instructions enormously easier. (But I wouldn’t try writing a graphical MMORPG in pure assembly.)

C is excellent for providing abstractions at the level required for portable systems or limited-feature/limited-aspect general programming (and I still wouldn’t try writing a graphical MMORPG in pure C). Building abstractions up from there is a sensible thing to do, of course, and this permits us to stack abstractions up high enough that we can almost talk in human(ish) terms about human problems with computers and do neat things. Deciding whether a particular tool or form of expression is appropriate for a particular set of problems is all about the nature and level of the problems and is rarely a judgment on the tools themselves.

This is why I don’t like monolingual or even single-paradigm programmers. They have been tricked into thinking that mental overhead is a fixed quantity, a universal constant from which all other difficulty and complexity is derived. This is a handicap which can only be surmounted by forcing oneself to explore outside the comfort zone provided by FavLangX or FavParadigmY.

[* What is funny about that is that C is far more portable than Java. “Compile once, run anywhere” doesn’t actually solve a problem that anybody has in the real world. It is trivial to compile a program once per architecture it will run on, or even once per platform. After all, that is how the runtime environments for interpreted/bytecode languages get published in the first place.]

Keyboards, Machine Guns, and Other Daily Tools

It looks like I’ll be at least occasionally moving between my home in Japan and offices in the US where I may wind up setting up a system for myself to use while I’m there (I’m not a huge laptop fan when it comes to extended work). This brings up an annoying issue: keyboard layouts.

It is difficult to find US-layout keyboards out here, so even though I usually write only a few Japanese-language emails per day it’s just not practical to use anything but the local flavor. Even if I did have a bunch of US-layout keyboards it would be insanely annoying to have to switch between JP-layout on laptops, server crash carts, and customer systems and then switch back to US-layout when I got back to one of my offices. So I’ve gotten accustomed to this layout and it works well for me.

The main keys that do letters and numbers are all in the same place, so it seems like this wouldn’t be a big deal. The problem is the crazy keys that do “top row” and wildcard stuff like bangs, hashes, quotes, backticks, at-marks, brackets, colons, semicolons, parens, etc. All the stuff that is rarely used on, say, a student or blogger’s keyboard is usually worn smooth on a programmer’s keyboard, especially if he switches languages all day. And they are all in radically different places on JP and US layouts.

So… naturally I’ll probably just get a decent one here, keep it in the closet over in the US, and whip it out whenever I show up.

But that brings up a point about familiarity and how “good” tools are from the perspective of the one using them. I could easily take the position that US-layout is poo and that JP-layout is superior. Or I could go uber nerd and pretend that some statistical study on finger reach and frequency of blahblahblah matters whatsoever when it comes to programming. It doesn’t, really. That would be to imagine that input is the hard part of programming, and it’s not — it’s figuring out what to write in the first place. So it’s not speed of input, per se, but smoothness of operation. More to the point, it’s which layout prevents the wetware halting problem: where the human doing the work has to stop what he is doing to figure out something unrelated to the essential task at hand.

But it remains true that some layouts actually are worse than others. It follows that other sorts of tools can fall either into the realm of “good enough that preference is a matter of taste or familiarity” or into the realm of “actual poorly designed garbage”.

That reminds me of guns. There are several excellent machine gun, rifle, and pistol designs employed in militaries across the world. Many of them are decent enough that, while some have a slight edge over others in some areas, I’d go to work with pretty much any of them. For instance, the M4 vs. the SCAR. The SCAR is actually better, but the M4 is familiar enough to me and I have enough confidence and skill with it that I just don’t really care which one I wind up getting stuck with.

I don’t have nearly as much faith in the AK-47 as a precision weapon, especially in an environment where quick on/off safety and partial reloading is critical. They are famously resistant to neglect (which is often mistaken for durability), but that’s really a key attribute for a rifle intended for the mindless Commie Horde sweeping en masse across the tundra, or the untrained insurgent/freedom-fighter/terrorist whose backers need cheap trashy guns with which to arm their cheap trashy goons. Indeed, the AK-47 is in real terms less good than the SCAR or M4, and there is a whole list of rifles and carbines I would consider before going to work with one (but still, it’s not absolutely awful, just so much less good than the alternatives that I’d avoid it if possible — sort of like Java).

Where this is really striking is with machine guns and pistols. On the pistol side there are a rather large number of designs that actually break down frequently during use. This never happens in a James Bond movie, of course, but in real life it happens at the most inconvenient times. Come to think of it, there is never a convenient time for a pistol to break. Once again, despite the absolute superiority in design of the semi-automatic over the revolver, familiarity can overcome the technical deficiencies between the two (with lots of practice), and I would actually prefer to go to work with certain revolvers over certain semi-autos. (This is to say nothing, of course, of the issue of caliber…)

With machine guns, however, the differences between good and bad designs are vast. In nearly any modern military you’re absolutely spoilt. A “bad” gun is one that doesn’t have a TV built into the stock to ease the passage of long turns on security. They are mindlessly easy to load, sight, barrel-change, fire, strip, clean, maneuver with, etc. The links disintegrate and can be re-used on unlinked ammo, all sorts of cool toys fit around the thing (which can, sometimes, make them start to suck just from the “too much Star Wars” problem), runaways can have their belt broken, and they will eat through just about any garbage that gets caught in the links or even fire bent ammo. They aren’t even unreasonably heavy (and it’s patently unfair to compare them to the uber lightness of an M4). It’s amazing how well these things work. But when they are all you know you start complaining about them, wishing you had a 240 when you’ve been handed an M-60 (because it’s possible to jam it up if you accidentally load it bolt-forward, or it probably lacks a rail system, or you’re an insufferable weakling complaining because you didn’t get the lightweight bulldog version, or whatever). I’ve had the misfortune of having to go to work with old Soviet machine guns, though, and can attest that they are indeed of almost universally horrible design.

When we say “crew served weapon” in modern armies we mean “the weapon is the centerpiece of the crew”, not “this weapon is absolutely unreasonable to assign to fewer than three people”. It might have meant that operating the machinery actually took a crew back when tripods included full-sized chairs, ammo came on a horse-drawn cart, and vast amounts of oil and water were consumed in operation. But that was the early 1900’s. We still employ machine guns as crew served weapons because it’s an advantage to have an AG and actually set up a tripod if you wind up facing off against a for-real infantry force, not because it’s difficult to wield one. Today a single person can easily maintain and operate a 240, M-60, MAG58, 249, MG42, MG3, or whatever. Not so with, say, the PKM (or heaven forbid the SG-43). An RP-46 is actually better if you come to the field with American-style assumptions that a single person is adequate to handle a machine gun.

The PKM is not really belt-fed, it’s chain-fed, and the chain doesn’t disintegrate. It’s also extremely strong: you can support more than a single person’s weight from a belt and it won’t break. The longer the belt the more bullets, and this seems good, until you realize that it feeds from the wrong side (the right), which prevents a right-handed shooter from feeding the pig himself with his left hand and leaves the indestructible spent chain right in front of the shooter. This means it’s right underfoot when running after a bit of shooting — which has made me bust my face in the dirt on top of the gun more than once (not so convenient at interesting moments, and absolutely detrimental to my Cool Point count).

But the failure of design doesn’t stop there. That stupid belt is nearly impossible to reload by hand without wearing gloves and using a lever (box top, table top, wrench, whatever) to force the rounds into the thing (yeah, you might load 50 rounds by hand, but how about 5,000?). They also rust instantly, in accordance with the PKM Belt Rust Time Law: however long it’s been since you last packed the belt is precisely how long it takes to rust exactly enough to generate a vast amount of busy work without rusting so much that the belt should be discarded. If you try oiling them to prevent that, they gum up or actually start growing hair instantly. It’s a never-ending cycle of trying to keep the belts from making your life suck without giving up and throwing them all away. Which is why the Soviets conveniently invented a reloading machine. Which itself sucks. I can’t even begin to explain the inadequacy of this stupid machine, but it actually is the only way to maintain even a marginally reasonable reload rate for belts — and there is no way you could do this under fire, or on Tuesday (the machine jams spectacularly on random days, Tuesday tending to be the worst for some magical reason).

I haven’t even begun to mention the inadequacy of the ammo crates. The standard ammo crates are insanely stupid. Actually, this isn’t a gripe reserved just for 7.62 ammo, it’s true for all commie ammo I’ve ever seen. The ammo cans aren’t like the infinitely reusable, universally useful, hermetically sealed, flip-top boxes found in non-backward armies. They are actually cans. Like giant soup cans, but without a pull-tab — not even a sardine key. They come with a can opener. A huge one (but only one per crate, not one per can). You read that right: a can opener. You know, the lever kind, where you hook the grabby part onto the crimp at the top edge of the can and pull to lever the pointy part down until it makes a tiny puncture, then slide over a touch and repeat until you’ve pried and ripped a gash large enough to do your business. Let that sink in. We’re talking about an ammo can. Like, with bullets that people need to do their job, hopefully sometime this year. But once you’re inside, the fun just doesn’t stop — no way. The thousand or so rounds inside are in boxes of 5 or 6 or so. The can that you worked so hard to open isn’t full of pre-loaded belts. That would deprive someone of a government job somewhere, and that’s just not Progressive. So inside there are dozens and dozens of tiny, crappy, flimsy little cardboard boxes, each containing a few rounds. And the rounds are individually wrapped in tissue paper.

You just can’t make this trash up. It’s amazing. How on earth could such a horrible, stupid, backward constellation of designs emerge from one of the two nations to reach the Moon before the end of the 20th century?

A guy I worked with a few years ago called Mule had a theory that this was, in fact, an excellent design for a machine gun system in a Socialist military. Nobody can use it alone, so you can’t get a wild hair up your ass and get all revolutionary — you need to convince at least a platoon to get crazy with you. You employ a gazillion people not only in the loving production and hand gift-wrapping of each one of the billions of rounds of machine gun ammunition throughout the nation, but another gazillion or so to open the cans and load the belts. It’s the ultimate low-employment-figure fixer — at least until the state digests enough of itself that this becomes suddenly unsustainable, of course.

Mule’s theory was that this machine gun design — from the actual shittiness of the gun itself to the complete circus of activity which necessarily surrounds its production, maintenance, and use — is a brilliant design from the perspective of the State, not the soldier, and that the aims of the two being at odds is simply the natural result of a socialist system. Mule was one of the most insightful people I’ve ever met (and I’m not being rhetorical — he really was a hidden genius).

Thinking about what he said has made me re-evaluate some of my assumptions about bad designs. Perhaps the designs are excellent — not for the end user, but for whoever is in charge of the end user. And that brings me back to thinking about just why the Java programming language is so bad, yet so prolific. Java is the PKM of the programming world. It’s everywhere, it sucks, it is good for (some, Stalin-like) bosses, and the whole circus surrounding its existence just won’t ever go away. And sometimes those of us who know in painstaking detail why a 240 (or nearly anything else in common use) is better are still stuck using it to get real work done.