Tag Archives: Java

Source or Satire?

From time to time I encounter openly discoverable code that is so wild in nature that I can’t help but wonder if the author was writing a machine function or a satirical statement.

Groovy source: ArrayUtil.java

After spending a few days plowing through Java code at the outset of a new Android project I found myself checking around for practical alternatives. In the course of that search (which netted Scala, Groovy and Clojure, in descending order of easy tooling for Android) I stumbled across this gem of a source file in the Groovy codebase. At first I couldn’t really tell if this was a joke about Java’s expressiveness or a functioning bit of code, but then I realized it is actually both — all the more funny because it’s expressing a cumbersome optimization that will execute on the same JVM either way:

ArrayUtil.java
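If you’d rather not click through, here is an abbreviated sketch of the shape of the thing (a paraphrase from memory, not a verbatim excerpt); the genuine article keeps the progression going for hundreds more lines, presumably to shave a varargs allocation off of dynamic call sites:

// Abbreviated sketch of the pattern in Groovy's ArrayUtil.java.
public class ArrayUtil {
    public static Object[] createArray() {
        return new Object[0];
    }

    public static Object[] createArray(Object arg0) {
        return new Object[]{arg0};
    }

    public static Object[] createArray(Object arg0, Object arg1) {
        return new Object[]{arg0, arg1};
    }

    // ...and so on, one hand-unrolled overload per argument count.
}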

Breach: A browser as practical satire

Someone from the Erlang world was kind enough to paste a link to Breach — a browser written in node.js. It’s so full of meta fail and manifests the very essence of hipster circular logic that… I can only assume it is satire in the same vein as INTERCAL.

breach.cc

IBM SDK for Node.js on System Z

The going question at IBM has, for the last few decades at least, been not “Is it a good idea?” but “Are people deluded enough to pay for this?” This stands in heavy contrast to the countless bouts of genius that have peppered their research and development over the last century.

But IBM is a business, after all, and we’ve all got to eat. IBM was late recognizing that the majority of programmers and other IT professionals had left the world of engineering behind for the greener pastures of pop culture and fansterish tech propaganda, and to play catch up IBM had to innovate. Actually, this is a sort of business genius: IBM realized that tech doesn’t sell as well as bullshit and buzz when it watched Motorola get steamrolled by Intel’s marketing efforts around the original 286.

Intel pitched a bad chip design to tech illiterate execs, deliberately avoiding customers’ engineering departments, and prevailed against Motorola’s vastly superior designs — brilliant, if you don’t mind being a charlatan. Fortunately, Intel has at least occasionally made up for it since then (having been the only ones serious about SSD reliability early on, for example).

IBM has gone one further. I think this must have started as a practical joke at IBM, and then someone realized “Wait! Holy shit! This could be the hipster coup of the century and get us back in the web game!” Har har har. Joke’s on… all of us.

IBM SDK for Node.js v1.1 on System Z

“Process scope variables” in Erlang

I couldn’t have written a more concise satire-by-demonstration of what sucks about bringing Java-style OOP thinking along as mental/emotional baggage into Erlang than this short message that appeared on the Erlang-Questions mailing list one day:

> > So much time spent for removing one State variable from a few function calls.
>
> much more time will be saved when refactoring from now on.
>
> imagine:
> most funs will now have signature:
>
> -spec a(pid()) -> ok.
>
> and most function bodies will look like:
>
> check(Pid),
> update(Pid),
> return(Pid).
>
> it will be a breeze now.

Like… whoa. That just happened. And it wasn’t sarcastic.

A gloriously sarcastic response can be found here.
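For anyone who missed the punchline: parking your state in a process and handing every function a Pid is just a hand-rolled mutable object, which is the very thing explicit State threading exists to avoid. A rough Java rendering of the “improvement” (hypothetical names, a sketch of the mindset rather than anyone’s real code):

// Hypothetical sketch: the "process scope variable" translated back
// into its native tongue.
public class ProcessScope {
    // Explicit state threading, the style the poster was so happy to escape:
    record State(int count) {
        State update() { return new State(count + 1); }
    }

    // Versus hiding the state behind a handle so every signature
    // collapses to a(pid()) -> ok. Also known as: a mutable object.
    static class Pid {
        private int count = 0;
        void check()  { if (count < 0) throw new IllegalStateException(); }
        void update() { count++; }
    }

    public static void main(String[] args) {
        State s = new State(0).update(); // you always know where the state is
        Pid pid = new Pid();             // the state lives "somewhere over there"
        pid.check();
        pid.update();
        System.out.println(s.count() + " vs " + pid.count);
    }
}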

Mental Overhead

I haven’t had any reason to write assembly by hand for quite a while, but the other day a deep hardware geek friend of mine asked for an opinion on an instruction set for an architecture he is working on — and of course that means using his assembly instructions directly.

I had forgotten what that is like. It is common today to hear people who have never written anything in assembly put C in the same category as assembly, having themselves heard that C is “low level”. C is certainly lower level than, say, Python or Erlang, but it’s a far cry from assembly. Saying “low level” isn’t the same thing as saying “hardware level” or “lower than…”. Abstractions are always relative to the level of your problem of the moment.

Perhaps C is metaphorically comparable to assembly if you are programming in userland and your major concerns are stretching background images or including on-click sound effects or whatever. But C itself is nothing like assembly. What is most interesting about C is that compilers can be written that abstract away the quirks of different hardware instruction sets and make programs magically portable.

Of course, the phrase “portable” is in the same boat as the phrase “low level” these days. If it’s not interpreted or compiled to bytecode (or, in the case of the ultra ignorant, if it’s not Java) then folks who have no experience in compiled languages will think you must be mistaken for using the phrase “portable”. Some more Java-hype-driven misconceptions that haven’t yet died out*.

After a long stint in the world of garbage collection and either extremely strong typing or completely dynamic typing it is funny to think of everything as an integer again. Assembly is not a bad way to deal with hardware-level problems (provided the instruction set doesn’t suck). A good set of machine instructions is precisely the sort of thing needed to solve the sort of problems you encounter when dealing with specific device behaviors or, say, bootstrapping a compiler for a simple language like C. An assembly with a decent syntax provides mnemonic devices in a way that makes dealing with those machine instructions enormously easier. (But I wouldn’t try writing a graphical MMORPG in pure assembly.)

C is excellent for providing abstractions at the level required for portable systems or limited-feature/limited-aspect general programming (and I still wouldn’t try writing a graphical MMORPG in pure C). Building abstractions up from there is a sensible thing to do, of course, and this permits us to stack abstractions up high enough that we can almost talk in human(ish) terms about human problems with computers and do neat things. Deciding whether a particular tool or form of expression is appropriate for a particular set of problems is all about the nature and level of the problems and is rarely a judgment on the tools themselves.

This is why I don’t like monolingual or even single-paradigm programmers. They have been tricked into thinking that mental overhead is a fixed quantity, a universal constant from which all other difficulty and complexity is derived. This is a handicap which can only be surmounted by forcing oneself to explore outside the comfort zone provided by FavLangX or FavParadigmY.

[* What is funny about that is C is far more portable than Java. “Write once, run anywhere” doesn’t actually solve a problem that anybody has in the real world. It is trivial to compile a program once per architecture it will run on, or even once per platform. After all, that is how the runtime environments for interpreted/bytecode languages get published in the first place.]

Keyboards, Machine Guns, and Other Daily Tools

I’ve got an annoying issue on my mind: keyboard layouts. This is brought on by the recurring need for me to travel to places where two different keyboard layouts are common and other programmers look at you funny if you have to glance down to make sure whether you’re hitting ‘@’ or ‘`’ or ‘"’ or whatever happens to be there just then. Neither of these regions, of course, uses the keyboard layout of my region (Japan).

I was born in the US and grew up with that layout, but it is difficult to find US-layout keyboards here. Though I usually write only a few Japanese-language emails per day it’s just not practical to use anything but the local flavor — especially since I use かな input for Japanese, not the insane (to me) Romaji system so many people are accustomed to. Even if I did have a bunch of US-layout keyboards it would be crazy to switch among JP-layout laptops, US-layout crash carts, JP-layout customer systems and US-layout workstations at my offices. So the “when in Rome” rule is in play and I’ve grown accustomed to this layout. It now works well for me.

“But what’s the big deal?” you may ask. Well, the main keys that do Latin letters and numbers are all generally in the same place, so it seems like this wouldn’t be a big deal. The problem is the crazy keys that do “top row” and wildcard stuff like bangs, hashes, quotes, backticks, at-marks, brackets, colons, semicolons, parens, etc. (Not to mention the Japanese-specific stuff like mode changes, 全角・半角, and other nonsense the majority of the world is spared.) All the parts of a keyboard that are pristine and rarely used on a typical email/pr0nz/games user’s keyboard are usually worn smooth on a programmer’s keyboard. Those frequent-use weird keys are the ones that seem to be completely, inexplicably jumbled around when you compare US, JP and various European keyboards (so why even have different layouts…?).

But that brings up a point about how personal familiarity and user expectations play into the perception of a given tool as being “good” or “bad”. I grew up on US keyboards; I am intimately aware that they are not “bad” tools; they are bad tools for me right now. The idea that a concept being “easy” is not the same as it actually being “simple” intersects with the idea that a tool being perceived as “good” is not the same as it being objectively “better” than some given alternative.

(It is interesting to note that even the descriptive language must change between the two cases above. Though the basic pattern of thought is similar, “concepts” and “physical tools” are different in ways that magically prevent certain forms of direct comparison.)

I could easily take the position that US-layout is poo and that JP-layout is superior. I could go uber nerd and pretend that some statistical study on finger reach, frequency of use, angle of finger motion, key tension and travel variance, etc. matters when it comes to programming. (Programmers who use Dvorak do this all the time. I always wonder if they take themselves seriously, are consciously crossing the satire-reality divide as a joke, or if they have a getting-punched-in-the-face fetish.) In the case of programming, layout doesn’t really matter — consistency does. To imagine that the character-per-second count of input matters in programming is to imagine that input is the hard part of programming. It’s not*. The problem is figuring out what to write in the first place.

[* Unless you are a Java or COBOL programmer. I didn’t realize how intensely verbose either of those were until I dealt with both again recently after years of absence.]

More to the point, what matters is which layout prevents the wetware halting problem in a specific person. When the human doing the work has to stop what he is doing to figure out something unrelated to the essential task at hand, he’s distracted, that sucks, and you have a wetware halting problem.

But it is still almost certainly true that some layouts are probably objectively worse or better than others for intense input jobs that are not intensely creative, like transcription. If that wasn’t true then the stenograph would never have taken off and court reporters would have been just happily flying through rapid-fire legal discourse in an uncompressed, fully textual medium on their typewriters — which would have been quite a thing to witness, actually. But they didn’t because the stenograph was objectively better than the typewriter for this. It follows that other sorts of tools can often be judged against one another in the same way.

Languages can be like that. Not the ones we speak, I mean real languages, the ones you speak math to your computer with. The problem with judging them is that there are several dimensions to a programming language. They usually center around three themes: some concept of “safety features”, some concept of “operational features”, and syntax-as-a-feature.

Safety is a bit of a mixed bag, because few folks seem to declare what they want from a language in terms of safety up-front, and rarely define whether it should be a feature of the language, the runtime or the compiler (until after the fact, of course, which is when everyone bitches about whatever the design decision turned out to be). There is the “safety from others” aspect, which The Management loves because they believe it permits them to demand that different elements of a program become invisible to other elements (the whole `public static blahblah` bit in Java). There is the type safety angle, which people are either really loud about or have nothing to say about (probably because once type systems start making sense to you, you’re prone to freaking out about it for about a month). And there is the runtime safety angle (three competing approaches: “It will never run if it’s unsafe.” VS “It’s OK if it crashes.” VS “Nothing is safe and it will always run but gradually grow more magical and mysterious over time and you will never really know bwahahahahaha!”).
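For the unacquainted, the “safety from others” angle in its native habitat looks roughly like this (a toy sketch, names invented):

// Toy sketch, invented names: visibility modifiers as managerial policy.
public class Payroll {
    private double hourlyRate = 15.0; // invisible to other elements, by decree

    // ...and the public static blahblah bit:
    public static double withholding(double gross) {
        return gross * 0.2; // flat rate, purely illustrative
    }
}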

Operational features are sort of hard to judge, though it seems like this would be easier somehow than the safety tradeoffs. I think this is because language design is still in its infancy and we’re still totally turned around backwards about the idea that “computer science” has anything to do with computers. So the language feature problem turns into the problem of finding the Goldilocks point of “just enough of the right hot features mixed in with the cool familiar ones to get work done” as opposed to having way too few or way too many features to make much sense to use in production.

Judging syntax is totally arbitrary — until you have to come back and read your own code after two years of not having seen it. There are sometimes languages X and Y where both have the same major features but the syntax of one makes it look more like line noise than code. Meh. This is why language arguments are never going to end (so I tend to have runtime arguments instead).

That’s a lot of dimensions. With all that in mind, I personally dislike Java. I dislike some of the things the JVM assumes should be true. I dislike that it’s taught first to people who really should learn more about the nature of programming beforehand. I… ugh, I just really dislike it as a platform. It solved a certain sort of problem we almost sorta had in the 90’s, but now it’s just causing problems — and yet as an industry we are too brainwashed to even spot them.

And, to make this post meander even further, that reminds me of guns. No, seriously. There are several excellent machine gun, rifle and pistol designs employed in militaries across the world. Many of them are decent enough that, while some have a slight edge over others in some areas, I’d go to work with pretty much any of them.

For example: the M4 vs. the SCAR. Meh. The SCAR is indeed objectively better, but the M4 is familiar enough to me that I just don’t really care which one I wind up getting stuck with. On the other hand, I don’t have nearly as much faith in the AK-47, especially in an environment where precision, reaction time, quick on/off safety and partial reloading are critical. While they are famously resistant to neglect (which is often mistaken for durability), that’s really a key attribute for a rifle intended for the Mindless Commie Horde or the Untrained And Unwashed Mass Formation of insurgent/freedom-fighter/terrorist whose backers need cheap, trashy guns with which to arm their cheap, trashy goons. Indeed, the AK-47 is in real terms dramatically less good than the SCAR or M4, and there is a whole list of rifles and carbines I would consider before going to work with one. (That said, it is not absolutely awful, just so much less good than the alternatives that I’d avoid it if possible — sort of like Java. I mean, at least it’s not the C++ of guns.)

Where this is really striking is with machine guns and pistols. On the pistol side there are a rather large number of designs that actually break down frequently during heavy use (granted, “heavy use” meant something very different for me than for most people, who don’t shoot for a living). This never happens in a James Bond movie, of course, but in real life it happens at the most inconvenient times. Come to think of it, there is never a convenient time for a pistol to break, because usually if you’re using your pistol it means your real weapons already broke. Once again, despite the absolute superiority in design of the semi-automatic over the revolver, familiarity can overcome the technical deficiencies between the two (with lots of practice), and I would actually prefer to go to work with certain revolvers over certain semi-autos. (This is to say nothing, of course, of the issue of caliber…)

With machine guns, however, the differences in good vs. bad designs are vast. In nearly any modern military you’re absolutely spoilt. A “bad” gun is one that doesn’t have a TV built into the stock to ease the passage of long turns on security and automatically arrange for pizza to be delivered to an 8-digit grid. They are mindlessly easy to load, sight, barrel change, fire, strip, clean, carry, etc. The links disintegrate and can be re-used on unlinked ammo, all sorts of cool toys fit around the thing (which can, sometimes, make them start to suck just from the “too much Star Wars” problem), runaways can have their belt broken, they will eat through just about any garbage that gets caught in the links or even fire bent ammo, and they probably prevent cancer. They aren’t even unreasonably heavy (and it’s patently unfair to compare them to the uber lightness of an M4). It’s amazing how well these things work. But when great machine guns are all you know you start complaining about them, wishing you had a 240 when you’ve been handed an M60 (because it’s possible to jam it up if you accidentally load it bolt-forward, it usually lacks a rail system, or you’re an insufferable weakling who won’t stop bitching because you didn’t get the lightweight bulldog version).

I’ve had the misfortune of having to go to work with old Soviet machine guns, though, and can attest that they are indeed objectively horrible.

When we say “crew served weapon” in modern armies we mean “the weapon is the centerpiece of the crew”, not “this weapon is absolutely unreasonable to assign to any fewer than three people”. The term “crew served” may have had more semantic purpose back when operating the machinery actually took a crew — back in the days when tripods included full-sized chairs, ammo came on a horse-drawn cart, and vast amounts of oil and water were consumed in operation. But that was the early 1900’s. We still employ machine guns as “crew served weapons” because it is generally advantageous to have an AG with you and usually a good idea to actually set up a tripod if you find yourself facing off against a for-real infantry force. That is completely different from calling it “crew served” because wielding one is equivalent in complexity to running a mortar section.

Today a single person can easily maintain and operate an M240, M60, MAG58, 249, MG42, MG3, MG11, or whatever. Not so with, say, the PKM (or heaven forbid the SG-43). An RP-46 is actually better if you come to the field with American-style assumptions that a single person is adequate to handle a machine gun (the “zomg! PKM!” not-a-Soviet fanboi hype may have already infected your brain, though, making this sound like a crazy statement — until you actually try both).

Let’s clear one point of nomenclature up straight away. The PKM is not really belt fed, it is chain fed, and the chain doesn’t disintegrate. It’s also extremely strong. “Strong” as in you can support more than a single person’s weight from an empty belt. The longer the belt the more bullets, and this seems good at first, until you realize that holy shit it feeds from the wrong side (the right). This prevents a right-handed shooter from feeding the pig himself with his left hand and leaves the indestructible spent chain right in front of the shooter, or rather tangled around his feet the moment he has to get up and move, which you have to do pretty much all the time because gun fights involve a lot more running around than shooting.

This little design whoopsie! has made me bust my face in the dirt or on the top of the gun more than once — not so convenient at interesting moments, and absolutely detrimental to my Cool Point count. Being at a tactical disadvantage and almost getting killed is one thing, but people actually saw that and it was embarrassing. Every. Goddamn. Time. It also hurts pretty bad (like, my feelings!).

But the failure of design doesn’t stop there. That stupid belt is nearly impossible to reload by hand without wearing gloves and using a lever to force the rounds into the thing. You have to at least find yourself some gloves and a stiff metal boxtop, a wrench, an ancient steel desk or something else firm that the butts of the round casings won’t slip on too easily, and use it to force those stupid rounds into their fully-enclosing holes. (And to you, the guy reading this who is thinking “I went to the range once and loaded a brand new, totally clean, uncorroded, never-been-fired-or-stepped-on belt — didn’t seem too hard to me”: sure, you might load 50 rounds into a pristine, lubricated belt by hand, but how about 5000 into a hundred chewed-up belts?)

They also rust instantly, in accordance with the PKM Belt Rust Time Law: the time it takes for a belt to rust to the point that it will malfunction if not serviced immediately (but not enough for it to be replaced by management) is exactly the amount of time between when you last placed it in a sealed container and now. (Go check right now — you’ll see that this rule somehow always holds.) If you try oiling them to prevent that, they gum up or actually start “growing hair” instantly. It’s a never-ending cycle of trying to keep the belts from making your life suck without giving up, throwing them all away and resolving to fight using your bad breath and attitude alone.

This brings me to why the Soviets conveniently invented a reloading machine. Which also conveniently sucks. (Compare.) I can’t even begin to explain the inadequacy of this stupid machine, but it actually is the only way to maintain even a marginally reasonable reload rate for belts. There is, however, no way you could do this under fire. Or on Tuesday. (The machine jams spectacularly at random, Tuesday tending to be the worst day.)

I haven’t even begun to mention the inadequacy of the ammo crates. The standard ammo crates are insanely stupid. Actually, this isn’t a gripe reserved just for 7.62 ammo, it’s true for all commie ammo I’ve ever seen. The ammo cans aren’t like the infinitely reusable, universally useful, hermetically sealed, flip-top boxes found in non-backward armies. They are actually cans. Like giant spam cans, but without a pull-tab — not even a sardine-key. They come with a can opener. A huge one (but only one per crate, not one per can). You read that right, a can opener — and only one for the whoooole crate, so you better hope Jeeter doesn’t drop it in the canal.

You may be thinking “Oh, a can opener, I’ve got one of those that plugs into the wall, how could Jeeter drop such a thing in the canal?!” Glad you asked. When I say “can opener” I don’t mean the first-world type. I mean the part of your boyscout era Swiss Army Knife you never realized had an actual use. You know, the lever-kind where you hook the grabby part onto the crimp at the top edge of the can and pull to lever the pointy part down until it makes a tiny puncture, then slide over a touch and repeat until you’ve prized and ripped a gash large enough to do your business. Let that sink in.

(Here is a video of a guy opening one to illustrate… WTF. And that’s a really nice crate in pristine condition — which you will never, ever see while on contract in, say, Nigeria.)

Now remember, we’re talking about an ammo can. Like with bullets that people need to do their job, hopefully sometime this year, but perhaps much sooner under the sort of conditions that make calmly remembering details like where the fscking can opener is very difficult.

Once you’re inside the fun just doesn’t stop — no way. The thousand or so rounds inside are in boxes of five or six. Well, “boxes”; what I really mean is squarishly shaped topographic nets made of some material that tenuously holds together well enough to almost seem like a paper packing material — so each one loosely resembles a little paper box. So this means that, unlike what you would have come to expect from armies in nations where maximizing busywork employment wasn’t the main and only goal, the can that you worked so hard to open isn’t full of pre-loaded belts. That would deprive someone of a government job somewhere and that’s just not Progressive. So inside there are dozens and dozens of those tiny, crappy, flimsy little cardboard boxes, each containing a few rounds. And before you start worrying that those stupid boxes are the end of the show, let me assure you that the party just keeps rolling along — each round is individually wrapped in tissue paper.

You just can’t make this shit up. It’s amazing. How on earth could such a horrible, stupid, backward constellation of designs emerge from one of the two nations to achieve serious, manned spaceflight before the end of the 20th century? GHAH! Had WWIII ever occurred I wouldn’t have known whether to snicker and lay scunion, secure in my knowledge of how thoroughly the enemy had hamstrung himself, or feel pity and offer a face-saving peace deal to spare the poor saps who are forced to try to actually get anything done under these conditions.

But maybe it’s brilliant

A guy I worked with a few years ago called Mule had a theory that this was, in fact, an excellent design for a machine gun system in a Marxist paradise. His reasoning went something like this:

  • Nobody can use it alone, so you can’t get a wild hair up your ass and get all revolutionary — you need to convince at least a platoon to get crazy with you.
  • You employ a gazillion people not only in the production of a billion lovingly gift-wrapped-by-hand rounds of machine gun ammunition throughout the nation, you employ another gazillion or so to open and load the belts.
  • It’s the ultimate low-employment-figure fixer — at least until the state digests enough of itself that this becomes suddenly unsustainable, of course (but that never stopped a socialist, despite the ever-lengthening list of failed socialist states).
  • You never really intended to go to actual war with the Americans in the first place (wtf, are you crazy?!?), so what you really needed was a police weapon of deliberately limited utility with which to suppress political dissent, not an actual infantry weapon of maximal utility under all conditions.

Mule’s theory was that this machine gun design — from the actual shittiness of the gun itself to the complete circus of activity which necessarily surrounds its production, maintenance and use — is a brilliant design from the perspective of the State, not the soldier. Furthermore, that the aims of the two are at odds is simply the natural outcome of being produced in a socialist/Marxist system. Mule was one of the most insightful people I’ve ever met (and I’m not being rhetorical — he really was a hidden genius).

Thinking about what he said has made me re-evaluate some of my assumptions of bad design. Perhaps the “bad” designs are excellent — not for the end user, but for whoever is in charge of the end user. And that brings me back to thinking about just why the Java programming language is so bad, yet so prolific, and how perhaps the design of Java is every bit as brilliant as the design of a good keyboard, differing in that each is brilliant from diametrically opposed points of view.

Java is the PKM of the programming world. It’s everywhere, it sucks, it is good for some (Stalin-like) bosses, and the whole circus surrounding its existence just won’t ever go away. And sometimes those of us who know in painstaking detail why a 240 (or nearly anything else in common use) is better are still stuck using it to get real work done.

I should, perhaps, write another edition of this interdisciplinary mental expedition in text focused around the failings of JavaScript (oh, excuuuuuse me, princess, “ECMA Script”). Or Ruby. Or MySQL. Or SQL itself. Or HTTP. Come to think of it, subjects of my angry dissatisfaction abound.

Don’t Get Class Happy

If you find yourself writing a class and you can’t explain the difference between the class and an instance of that class, just stop. You should be writing a function.
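A sketch of the tell, rendered in the mother tongue of the contagion (invented names): if your class looks like the first version, the second is all you ever needed.

// The antipattern: no state, no identity, nothing an instance adds.
class ReportFormatter {
    public String format(String body) {
        return "== REPORT ==\n" + body;
    }
}
// usage: new ReportFormatter().format(text);   // why did we "new" that?

// What it was all along, a function (as close as Java gets to one):
final class Reports {
    private Reports() {}
    public static String format(String body) {
        return "== REPORT ==\n" + body;
    }
}
// usage: Reports.format(text);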

This antipattern — needless classes everywhere, for everything — is driving me crazy. Lately I see it in Python and Ruby a good bit, where it really shouldn’t occur. I feel like it’s a mental contagion that has jumped species from Java to other languages.

In particular I see classes used as inter-module namespaces, which is odd since it’s not like there is a tendency to run out of meaningful names within a single source file (of reasonable length). Importing the module from outside can be from foo import *, or import foo, or from foo import name, or even import foo as bar (ah ha!), making flexible use of namespacing where it is needed most — in the immediate vicinity of use.

So don’t write useless, meaningless, fluffy, non-state&behavior classes. Any time you can avoid writing classes, just don’t do it. Write a function. It’s good discipline to see how far you can get writing zero classes in an implementation, and then write it with some classes and see which is actually less complex/more readable. In any case writing two solutions will give you a chance to examine the problem from multiple angles, and that’s nearly always an effort that results in better code, since it forces you to value grokness over hacking together a few lines that sort of cover the need of the moment (and then forgetting to ever come back and do it right).

Or maybe I should be admonishing people to seek flatness in namespaces, since this implies not writing classes that are stateless containers of a bunch of function definitions which incidentally get referred to as “methods” even though they don’t belong to a meaningful object in the first place.

Or maybe I should be railing against the oxymoronic concept of “stateless objects/classes”.

Or maybe I should be screaming about keeping our programming vocabulary straight so that we can finally say what we mean and mean what we say. (Sound advice in all walks of life.)

This last idea is perhaps the most indirect yet powerful of all. But it implies “standards” and so far in my experience “standards” and “computing” don’t tend to turn out very well once we get beyond, say, TCP/IP or ANSI and Unicode.

Language Preachers: A Language’s Worst Enemy

Nothing crushes a language’s chance at becoming popular like the activities of its political and religious agitators. It doesn’t matter if the situation of the language is that it’s on the rise, on the decline or on the rebound. Language preachers suck, always. The only chance a language has for a successful PR campaign is when that campaign is based completely on lies, is extremely well funded, and appears to provide a remedy-by-aversion to the sort of problems incompetent instructors and consultants can’t think through, while still encouraging the sale of completely abstract content about abstractions without anyone actually having to write useful code (Java). It doesn’t matter if this campaign is a complete farce; it’ll work if it permits enough of the stupid to perceive themselves as now smart enough, because it can generate an entire sub-industry based on trying to prove that something can be done in this newly minted language for the incompetent. A heavy side of brand-new buzz words helps, too (if only the “cloud” had been a language…).

But that’s just Java. Other languages don’t have the benefit of an academic revolution to assist in shoving a particular interpretation of a single programming paradigm down our throats (complete with a language that enforces just that one paradigm). Most languages have to make it on their own: on merit (Haskell, a few lisps, Python), as the language of the year’s hottest killer app (PHP and the original web, Ruby and Rails), as the language of a killer app (Javascript and Mozilla/Firefox), or by being both strong on merits and the mother of a whole family of killer apps (C being the obvious example of the closest thing to modern immortality).

Recently I’ve been involved in a few discussions about the relative merits of a few different languages, database systems, distro flavors, kernel details, etc. Of all these topics, the top two most likely to start a religious war are choice of language and choice of database. My made-up estimate is that language discussions are twice as likely to incite digital jihad as database discussions.

Maybe it is because we spend most of our time dealing directly in a language and actually thinking in that language, so it is internalized to a level nothing else is. I don’t “think” in LibreOffice (though I’ve seen a few people who can think in Calc functions and others in Excel functions — and I hope the two types never meet to talk about it), I don’t carry on an internal monologue of sorts in Firefox or Nautilus or Gnome — but I do talk to myself in a sort of visual way in Python, Guile, Haskell, C, Bash, Perl, plpgsql, opcodes and a few other languages when the need arises. Programmers are prone to do this — and it turns out that they are most likely to do it only in a single language, ever (pick one of Java, Perl, Javascript, PHP, or Ruby for the best religious wars).

To someone who doesn’t speak enough languages to have evolved a sort of quiet disdain for them all, a weird form of moral relativism pervades. All languages other than the one they already know well are at once “the best for some job, just depends on what you’re doing” and completely inferior at everything compared to $the_only_lang_i_know_well. And of course the complete n00b who doesn’t even know one language well yet will swear that LangX — which he will never take the time to study but will take the time to flood IRC with airy questions about — is the cure for cancer and war and was handed down by Mayan priests, without actually knowing anything about it. Obviously there is a conflict here.

Most languages suck, and some suck far less than others. A very few suck so much less that they are almost good. That list is very short. The list of languages I think to myself in regularly is a mix of languages that I don’t completely despise and ones that I do despise but must use because useful projects are already written in them. It used to grind on me that I had to use inferior tools so often, but I’ve gotten past it by accepting that most programmers are either really bad at what they do or are compelled by economic circumstances to abandon both their code and the quest for decent tools prematurely. Meh.

The last few years I’ve noticed a lot of Perl people getting zealous about their language. I’m not writing to bash Perl or its community — different communities go through their own inner social cycles. But the last six years or so have seen Perl people transition from a group that self-assuredly represented the original web/CGI hackers and was held in guruly esteem to a group that is panicked about the low percentage of new projects written in Perl compared to the late 90’s and early 00’s. They’ve gone from a group that is solving problems to a group that is actively trying to tell everyone that Perl is still the thing to use, that it’s still hip and cool and better than everything else out there.

This is hurting Perl more than it is helping. Perl is indeed a good language, but the days when it was the de facto standard for a large subset of new project types are over. Panicking about it won’t help — but writing useful programs sure might. Today there are many alternatives that are at least as good as Perl ever was and many that are just better. Algol was and still is a great language, but I’m not about to start a new project in it. C is just better, and so far hasn’t been surpassed in that particular space (though its time may come, too). I make the decision not to use Algol out of an informed regard for the difficulty of maintaining Algol code in 2012 versus maintaining other code in 2012. If someone were constantly bombarding me with pro-Algol propaganda and biased trash benchmark comparisons, then I would avoid Algol on principle instead of even learning enough about it to know whether or not it could have been a good choice to begin with.

If Perl is the awzumest evar, then there is no need to proselytize. If it is on the decline, then there is a huge need to defend the temple — and this is the perception I have of this sort of behavior. It makes Perl people sound like Java people back when Java was a crap language trying to gain some mindshare — though in defense of that tactic, Java did take over the shallow end of the pool, and that turned out to be the largest part of the pool by a vast margin.

The Lisp community screwed itself by engaging in Java-style cheerleading for a time. This drove a wedge between the people who really could have benefitted from learning it and the community of language snobs who were already privy to The Great Mystery. The Postgres community was that way as well until very recently. I’m a card-carrying lisper myself, but I admit the community just got retarded when it came to interacting with others. The only thing that has redeemed Lisp in recent years is that some of the best commercial Lisp developers also happen to be excellent writers. If it weren’t for that there would have been no revival, because the inner community mixed with the outer community like water and oil over this language preaching business.

Perl is a genuinely good language and it has a genuinely good runtime (and the two are totally different issues). If it is fading, let it fade with majesty as one of the traditional uber-hacker languages of yore, bringing out your Perl magic as a way to intrigue and enlighten n00bs instead of making the word “Perl” synonymous with a noisy cult that is more interested in evangelizing than writing solutions to problems.

Don’t make the same mistake the Lispers did when they felt their language was threatened — it’s the difference between newcomers deliberately avoiding Perl and newcomers simply not having tried it yet.