Tag Archives: Computing

Fred Brooks: Design Requirements

This overlooked resource is a talk that Fred Brooks gave to a very small audience on June 4, 2007 — the first day of the WSOM Design Requirements Workshop. He does not discuss requirements or requirements elicitation (which is what everybody else talked about) but rather design as a process, and how that process is inseparable from the process of divining requirements. His primary points are:

  • You can’t know what you want to build until you already have something concrete. Everything is a throw-away until it is “good enough”.
  • Since you can’t know what you want to build, you don’t know the requirements, even after you start work. This indicates that the process of design and the process of divining requirements cannot be treated separately the way that academic literature treats the subjects.
  • The design process sometimes occurs in a more complex environment than in past years because of the late 20th century concept of “collaborative design”. Further, this is not the best design model for every project.

Unlike many other Fred Brooks talks there is a bit of audience interruption — sometimes with valid points, sometimes not — and you can observe both Fred’s unscripted response as well as others in the audience responding to audience comments and questions. It is interesting from a human perspective for this reason as well.

Racing to remove the last Nix

This post was prompted by a discussion on ScientificLinuxForum. The subject diverts significantly from the original discussion, so I’ve placed it here instead. The thread was initially about the release of RHEL 6.3, but discussions there have a tendency to wander, particularly since many are worried we are in the last days of free computing with the advent of UEFI lock-down, DRM-everything and new laws which prevent the digital equivalent of changing your own oil. This post just doesn’t belong in the thread, and it may be of interest to a more general audience.

Unix philosophy is shifting. We can see it everywhere. Not too long ago on a Fedora development list an exchange equivalent to:

“I wanna do X, Y, and Z this new way I just made up and everyone says it’s bad. Why?”
“It breaks with everything Unix has done for 40 years that is known to work.”
“I don’t care what Unix has done. I want to make it work this way instead.”
“It’s insecure.”
“ummm… oh…”
“Besides, it introduces binary settings, so the user can’t adjust and fix them manually if the system goes to poo, and can’t write scripts to change their settings without going through an API you’ve yet to even consider writing. That means more work for everyone while security suffers. And telling someone to reinstall their OS because one file got corrupted is not acceptable by our way of thinking.”
“uhhhhh… oooh?”
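The scripting argument in that exchange is easy to make concrete. Here is a minimal sketch (the settings file and its keys are made up for illustration) of why plain-text formats keep users and their scripts in control: any stock tool can read, fix, or rewrite them.

```python
import configparser
import io

# A hypothetical plain-text settings file; any editor or script can touch it.
text = """\
[display]
resolution = 1024x768
depth = 24
"""

cfg = configparser.ConfigParser()
cfg.read_string(text)

# A user script can read, adjust, and rewrite a setting with stock tools;
# no special API is needed, and a corrupted line is fixable by hand.
cfg["display"]["depth"] = "32"

out = io.StringIO()
cfg.write(out)
print(out.getvalue())
```

A binary blob holding the same settings would need a dedicated (and working) API before a user could do any of this.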

Let me be clear, there is the world of Unixy operating systems, there is the Unix design philosophy, and then there is the Unix religion. Part of the fun in a flame war is detailing how your opponent is a proponent of whatever part of the spectrum would most undermine their position at the time (usually the religion accusation is thrown, unless someone makes a dive straight to Nazism). The problem with dividing the world of grown-up operating systems into three stripes that way, though, is that it misses why a religion evolved in the first place.

Religion is all about belief, in particular a belief in what is safe and reliable. If I don’t offend God I’m more likely to get into Heaven — that’s safe and reliable. If I don’t give every process the ability to write arbitrarily then I’m less likely to have problems — that’s safe and reliable. Whatever God is up to I’m not really sure, he hasn’t let me in on it all, but that restrictions to write access prevent things like a rogue process (malicious, buggy or deliciously buggy) from DoS’ing the system by filling it up with garbage is something I can understand.

But not everyone can understand that, just like I can’t understand God. That’s why we have guidelines… er, religions. The fun part about the Unix religion is that it’s got a cultish flair, but the most functional part is that its effects can be measured and generally proved (heuristically or logically if not formally) to be better or worse for system performance and service provision.

It is good to question “why” and be a rebel every so often, but you’ve got to have a point to your asking and you’ve got to be prepared to hear things you may not have expected — like the response “It’s insecure”, which may be followed by an ego-demolishing demonstration. But people don’t like having their egos demolished, and they certainly hate studying up on new-things-that-are-actually-old, yet they still adore the question “why” because it sounds so revolutionary and forward-thinking.

But IT people are educated, right? They are good at dealing with detailed situations and evaluating courses of action before committing to this or that plan, right? It’s all about design, right?

I’m here to tell you that we’ve got problems.

We are absorbing, over time, less talented and grossly inexperienced developers across all of techdom. It started with tossing C in favor of Java, and now even that in favor of Ruby in some places because it’s like “easier Java… and… hey, Rails!”. (This isn’t to say that Ruby is a bad language, but certainly that it shouldn’t be the only one you know or even the first one you learn.) Almost no universities treat hard math or electrical engineering courses as a prerequisite for computer science any more. In fact, the whole concept of putting hard classes first to wash out the stupid or unmotivated has nearly evaporated. This is not just in CS courses, but the dive has been particularly steep there. These days, as ironic as it may seem, the average programmer coming from school knows next to nothing about what is happening within the actual machine, whereas a hobbyist or engineer coming from another field who is fascinated with machine computation understands quite a bit about such things.

Part of it probably has a lot to do with motivation. A large percentage of university students are on a conscious quest for paper, not knowledge, and want to get rich by copying what is now an old idea. That is, they all dream of building the next Facebook (sorry, can’t happen, Facebook will version up; at best you might get hired by them, loser). On the other hand every hobbyist or out-field engineer who spends personal time studying sticky problems in computer science on their own time is genuinely interested in the discipline itself.

It is interesting to me that most of my self-taught friends have either worked or are working through the MIT open coursework on SICP, K&R, Postgres source tours, and a variety of other fairly difficult beginner and advanced material (and remember their reference points remarkably well), while most of the CS graduates I know are more interested in just chasing whatever the latest web framework is and can’t explain what, say, the C preprocessor does. Neither group spends much time writing low-level code, but the self-educated group tends to have some understanding at that level and genuinely appreciates opportunities to learn more, while many of the provably educated folks don’t know much, and don’t care to know much, about what is happening within their machines. (That said, I would relish the chance to go back to school — but since I know I’ll never have the chance I’ve just got to read the best sources I can find and have my own insights.)

This has had a lot of different effects. In the past as a community we had a problem with the Not Invented Here syndrome (aka NIH — yes, it’s got its own acronym (and sometimes there are good reasons to make NIH a policy)) and sometimes deliberate reinventing of the wheel. Now we have the even worse problems of Never Heard of That Before and Let’s Jam Square Pegs Where They Don’t Belong (like trying to coerce the Web into being an applications development framework instead of a document linking and publication service, for example).

A lot of concerns have been raised over the last few years about the direction that Unix has been headed in (or more specifically, a few very popular consumer-oriented distributions of Linux which represent the majority of Unix in desktop and tablet use today). There are issues ranging from attempts to move settings files from plain text to binary formats, efforts to make the desktop into one giant web page, efforts to make the system behave more Windows-like (give anyone the privileges to install whatever packages they want into unrestricted environments (protip: toy with the last two words here — there is a solution…)), and many other instances which scream of misinterpreting something that is near to someone’s experience (“easy”) as being less complex (“simple”). Some of these are just surface issues, others are not. But most grind against the Unix philosophy, and for good reason.

Most of these un-Unixy efforts come from the “new” class of developer. These are people who grew up on Windows and seem determined to emulate whatever they saw there, but within Unix. Often they think that the way to get a Unix to feel like Windows is to muck with the subsystems. Sometimes this is because they think that they know better; sometimes it is because they realize that the real solutions lie in making a better window manager, but since that is hard, the subsystems are the easier route (and this feels more hackish); but most often it is simply because they don’t understand why things work the way they do and lack the experience to properly interpret what is in front of them. What results are thoughts like “Ah, I wish that as an unprivileged user I could install things via binary bundle installers, like off downloads.com in Windows, without remembering a stupid password or using some stupid package manager and get whatever I want. I can’t remember my password anyway because I have the desktop set to auto-login. That would put me in charge as a user!” Of course, they think this without ever realizing that this situation in Windows is what puts Eastern European and Chinese government crackers in charge of Windows worldwide.

This gets down to the core of operating system maintenance, and any system administrator on any operating system knows that, but the newcomer who wants to implement this “feature” doesn’t. What they think is, “Linux permissions are preventing me from doing that? Linux permissions must be wrong. Let’s just do away with that,” and they go on to write an “extension” which isn’t an extension at all, but rather a huge security flaw in the system. And they do it deliberately. When others say “that’s a bad idea” they say “prove it”, and accusations of religious fundamentalism soon follow.

But there could have been a better solution here. For example, group permissions were invented just for this purpose. There is (still) a wheel group in every Linux I’ve seen. There’s even still a sys group. But I’ve seen them actually used properly once or twice, ever — instead we have another triangular wheel which has been beaten round over the years called sudo, and a whole octopus of dangly thingies called PAM and SE domains and… and… and… (do we really want one more?)

Anyway, {groups, [insert favorite permissions system]} aren’t a perfect solution, but they go a long way toward doing things right in a simple manner without a lot of mucking about with subsystem changes. Back in the Old Days users had the same concerns, and these systems were thought out way back then. But people don’t go back and research this sort of thing. Learning old, good ideas is hard: not hard to do, really, but sitting still and thinking long enough to understand is hard for a lot of people. There is a wealth of knowledge scattered throughout the man pages, info docs and about a bajillion websites, open source books, mailing list archives, newsgroup archives, design documents, formal treatments, O’Reilly books, etc. (what?!? books?!? How old fashioned! I’m not reading a word further!) but few people take the time to discover these resources, much less actually use them.
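To make the point about plain old mode bits concrete, here is a small sketch (the directory is a throwaway temp dir standing in for, say, a shared package-staging area) of how a group-writable, setgid directory already expresses “this group may manage this area” with no extra subsystem at all:

```python
import os
import stat
import tempfile

# Throwaway directory standing in for a group-managed area.
d = tempfile.mkdtemp()

# rwx for owner and group, nothing for others, plus the setgid bit,
# so files created inside inherit the directory's group.
os.chmod(d, 0o2770)

mode = os.stat(d).st_mode
print(stat.filemode(mode))        # drwxrws---
print(bool(mode & stat.S_ISGID))  # True: the setgid bit is set
```

Put the right users in the right group, and they can administer that area without sudo, PAM stacks, or any new machinery; the kernel has enforced exactly this since the beginning.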

SELinux is another good idea someone had. But it’s not immediately obvious to newcomers, so most folks just turn it off because that’s what someone else said to do. This is totally unnecessary, but it’s what a lot of people do. It also gets very little development attention on Ubuntu, the most Windows-like Linux distro, because that distro has the highest percentage of uneducated ex-Windows users. You know what most exploits are written for? SELinux-disabled Ubuntu boxes running a variety of closed-source software (Adobe products are pretty high on the list, but there are others) and unsecured web services (PHP + MySQL (i.e. hacked-up Drupal installations) top the list, but to be fair they are the most prolific also). An example of the misconceptions rampant in the Ubuntu community is the belief that running something in a chroot makes it “secure” because the technique is colloquially called a “chroot jail”. When told that chroot doesn’t really have anything to do with security, and that a process can escape from a chroot environment if it wants to, they get confused or, even funnier/sadder, want to argue. They can’t imagine that subsystems like mock depend on chroot for reasons other than security.

Why on earth would anyone disable a tool like SELinux if they are going to digitally whore their system out all over the internet by exposing the sensitive bits the way PHP programs do? Because they just don’t know. Before turning it off, no Apache screen. After turning it off, feathers! Before turning off SELinux and installing Flash, no pr0nz on the screen, just a black box that said something was broken on pornhub.com. After turning it off, tits! The immediate effect of turning it off is simple to understand; the long-term effect is hard to understand; and learning the system itself requires grokking a new concept, and that’s hard. That’s why. And even better, the truly uninformed think that setenforce 0 is some slick haX0r trick because it’s on the command line… oooh.

So, simply put, pixels.

Pixels is the interest these days. Not performance, not sane subsystems, not security, not anything else. The proper arrangement of pixels. Pixels can put tits on the screen; security subsystems and text configuration files can’t do that — at least, the connection between the two is impossible to manage for the average ex-Windows user.

The new users coming to Linux trying to Dozify it are doing so in the pure interest of pixels and nothing more. They don’t know much about information theory, relational data theory or any of the other things that people used to be compelled to study (“nuh uh! I learnt how to make Drupal show words on the screen, so I know about RDBMSs!”). Many mistake the information in a howto on a blog for systems knowledge, and most will never actually make the leap from knowledge to wisdom. They tinker with Linux but most of that tinkering doesn’t involve exploration as much as it involves trying to reshape it in the image of an OS they claim to be escaping. They can tinker with Linux because you just can, and you can’t with OS X or Windows.

You can make Linux your own. This is the right reason to get involved, whether your motivation is primarily pixels or whatever, any reason is a good reason to be interested in new development. But you can’t roll in assuming you know everything already.

And that’s the core problem. Folks show up in Linux land thinking they know everything, willing to break with over 40 years of tremendous computing success and tradition. Some even go so far as to arrive with prior intent to break things just for the social shock value. But ultimately it’s all in the interest of pixels.

But we don’t have to compromise the underlying OS and subsystems to get great desktop performance, run games, get wacky interactive features that aren’t available anywhere else, do multimedia (legally via Fluendo or via more natural means), or even just put tits on the screen. In fact all those things were possible (even easy) about a decade ago on Linux, but few people knew enough about the different components to integrate them effectively. What we need is developers who are educated enough about those separate systems to develop competently within and atop them without creating n00beriffic, monolithic junk designs that spread dependencies like cancer across the entire system.

The original triad of RPM, Yum and PackageKit was a great example of how to do it right — not perfect, but very nearly. They were linearly dependent, and the dependencies were exclusively top-down, excepting necessary core system libraries/runtimes (the presence of Python, openssh and Bash, for example, is not an unreasonable expectation even on a pretty darn slim system).

But then someone comes along and wants to make PackageKit able to notify you with an audio alert when there is something worth updating. Instead of developing a modular, non-entangled extension that is linearly dependent on PackageKit — not knowing well enough how to design such things, nor willing to take the time to read PackageKit and grok it first — the developer decides to just “add a tiny feature to PackageKit”, which winds up making it grow what at first appears to be a single, tiny dependency: PulseAudio.

So now PackageKit depends on a whole slew of things via PulseAudio that the new feature developer didn’t realize, and over time those things grow circular dependencies which trace back to the feature in PackageKit which provided such a cute little audio notifier. This type of story gets even more fun when the system becomes so entangled that though each component comes from wildly differing projects no individual piece can be installed without all the others. At that point it matters not whether a dependency is officially up, down or sideways relative to any other piece, they all become indirectly dependent on everything else.
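A toy model of that decay (the package names and edges below are illustrative, not the real dependency data of these projects): a depth-first search over a dependency graph turns up the sideways edge that quietly made everything depend on everything.

```python
# Toy dependency graph; names are stand-ins, not real package metadata.
deps = {
    "PackageKit": ["PulseAudio", "yum"],
    "PulseAudio": ["dbus", "some-desktop-lib"],
    "some-desktop-lib": ["PackageKit"],   # the sideways edge that closes a loop
    "yum": ["rpm"],
    "rpm": [],
    "dbus": [],
}

def find_cycle(graph):
    """Depth-first search; returns one dependency cycle as a list, or None."""
    WHITE, GREY, BLACK = 0, 1, 2
    color = {n: WHITE for n in graph}
    stack = []

    def visit(n):
        color[n] = GREY
        stack.append(n)
        for m in graph.get(n, ()):
            if color.get(m, WHITE) == GREY:          # back edge: cycle found
                return stack[stack.index(m):] + [m]
            if color.get(m, WHITE) == WHITE:
                c = visit(m)
                if c:
                    return c
        stack.pop()
        color[n] = BLACK
        return None

    for n in graph:
        if color[n] == WHITE:
            c = visit(n)
            if c:
                return c
    return None

print(find_cycle(deps))
```

With a clean top-down stack, `find_cycle` returns None; one careless “tiny feature” later, it doesn’t, and at that point no piece can be installed or reasoned about without all the others.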

HAL got sort of like that, but not through external package dependencies — its dependencies got convoluted on the inside, within its own code structure, which is just a different manifestation of the same brand of digital cancer. Actually, gcc is in need of some love to avoid the same fate, as is the Linux kernel itself (fortunately the corrosion of both gcc and the kernel is slower than HAL’s, for pretty good reasons). This sort of decay is what prompts Microsoft to ditch their entire code base and start over every so often — they can’t bear to look at their own steaming pile after a while, because it gets really, really hard, and that means really, really expensive.

In the story about PackageKit above I’m compressing things a bit, and audio alerts are not the way PackageKit got to be both such a tarbaby and grow so much hair at the same time (it is still cleanly detachable from yum and everything below) — but it is a micro example of how this happens, and it happens everywhere that new developers write junk add-on features without realizing that they are junk. A different sort of problem crops up when people don’t realize that what they are writing isn’t the operating system but rather something that lives among its various flora, and that it should do one thing well and that’s it.

For example I’m a huge fan of KDE — I think when configured properly it can be the ultimate desktop interface (and isn’t too shabby as a tablet one, either) — but there is no good reason that it should require execmem access. Firefox is the same way. So is Adobe Flash. None of these programs actually require access to protected memory — they can run whatever processes they need to within their own space without any issue — but they get written this way anyway and so this need is foisted on the system arbitrarily by a must-have application. Why? Because the folks writing them forgot that they aren’t writing the OS, they are writing an application that lives in a space provided by the OS, and they are being bad guests. Don’t even get me started on Chrome. (Some people read an agenda into why Flash and Chrome are the way they are — I don’t know about this, but the case is intriguing.)
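For the curious, “execmem” means a process asking for memory that is writable and executable at the same time, which a strict policy flags because that is exactly what injected code needs. A Linux-specific sketch that creates one such mapping and then spots it in the process’s own memory map:

```python
import mmap

# Ask for a page that is writable *and* executable at once -- the request
# that makes a program need "execmem" under a strict SELinux policy.
buf = mmap.mmap(-1, 4096,
                prot=mmap.PROT_READ | mmap.PROT_WRITE | mmap.PROT_EXEC)

# Any W+X mapping now shows up in /proc/self/maps with "rwx" permissions.
with open("/proc/self/maps") as f:
    wx = [line.split()[0] for line in f if " rwx" in line]

print(len(wx) > 0)  # True: we created a writable+executable mapping
buf.close()
```

A well-behaved application never needs such a page; it can write to one region and execute from another. When a browser or plugin demands W+X anyway, the whole system’s policy has to be loosened to accommodate one bad guest.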

Some distros are handling these changes better than others. The ones with strong guidelines like Fedora, Arch and Gentoo are faring best. The ones which are much further on the “do whatever” side are suffering a bit more in sanity. Unfortunately, though, over the last couple of years a few of the guidelines in Fedora have been changing — and sometimes not just changing a little because of votes, but changing because things like Firefox, systemd, PulseAudio, PackageKit, etc. are requiring such changes be made in order to function (they haven’t gone as far as reversing library bungling rules completely to let Chrome into the distro, but it’s a possibility).

To be polite, this is an interesting case of it being easier to re-write the manual than to fix the software. To be more blunt, this is a guideline reversal by fiat instead of vote. There is clear pressure from obviously well-monied quarters to push things like systemd, Gnome3, filesystem changes and a bunch of other things that either break Fedora away from Linux or break Linux away from what Unices have always been. (To be fair, the filesystem changes are mostly an admission of how things work in practice and an opportunistic stab at cleaning up /etc. Some of the other changes are not so simply or innocently explained, however.)

This is problematic for a whole long list of technical reasons, but what it says about the business situation is a bit disconcerting: the people with the money are throwing it at people who don’t grok Unix. The worst part is that breaking Linux in an effort to commit such userland changes is completely unnecessary.

Aside from a very few hardware drivers, we could freeze the kernel at 2.6, freeze most of the subsystems, focus on userland changes, and produce a better result. We’re racing “forward”, but I don’t see us in a fundamentally different place than we were about ten years ago on core system capabilities. This is a critical problem with a system like Windows, because customers pay through the nose for new versions that do exactly what the old stuff did. If you’re a business you have a responsibility to ask yourself what you can do today with your computers that you couldn’t do back in the 90’s. The idea here is that the OS isn’t what users are really interested in; they are interested in applications. It’s harder to write cool applications without decent services being provided, but they are two distinctly different sets of functionality that have no business getting mixed together.

In fact, Linux has always been a cleanly layered cake and should stay that way. Linux userland lives atop all that subsystems goo. If we dig within the subsystem goo itself we find distinct layers there as well that have no business being intertwined. It is entirely possible to write a new window manager that does crazy, amazing things that were unimagined by anyone else before without touching a single line of kernel code, messing with the init system, or growing giant, sticky dependency tentacles everywhere. (Besides, any nerd knows what an abundance of tentacles leads to…)

The most alarming issue over the longer term is that everyone is breaking Linux differently. If there were a roadmap I would understand. Sometimes it’s just time to say goodbye to whatever you cling to and get on the bus. But at the moment every project and every developer seems to be doing their own thing to an unprecedented degree. There has been some rumbling that a few things emanating from RH in the form of Fedora changes are deliberate breaks with Unix tradition and even the Linux standard, and that perhaps this is an effort to deliberately engender incompatibility with other distros. That sounds silly in an open source world, but the truth of the business matter with infrastructure components (and to be clear, platform equates to infrastructure today) is that while you can’t lock out small competitors or stop users doing what they want, no newcomer can make a dent in the direction of the open source ecosystem without very deep pockets.

Consider the cost of supporting just three good developers and their families for two years in a way that leaves them comfortable about their career prospects afterward. This is not a ton of money, but I don’t see a long line of people waiting to plop a few hundred thousand down on a new open source business idea until after it’s already been developed (the height of irony). There are a few thousand people willing to plop down a few million each on someone selling them the next already worn-out social networking scheme, though. This is because it’s easy to pitch a big glossy brochure of lies to suckers using buzzwords targeting an established market, but difficult to pitch the creation of a new market, because that requires teaching a new idea; as noted above, people hate having to work to grasp new ideas.

Very few people can understand the business argument for keeping Linux a Unixy system and how that can promote long-term stability while still achieving a distro that really can do it all — be the best server OS and maintain tight security by default while retaining the ever-critical ability to put tits on home users’ screens. Just as with developers, where effort and time aren’t the problem but understanding is, with investors the problem isn’t a lack of funds but rather a lack of comprehension of the shape of the computing space.

Ultimately, there is no reason we have to pick between having a kickass server, a kickass desktop, a kickass tablet or a kickass phone OS, even within the same distro or family. Implementing a sound computing stack first and giving userland wizards something stable to work atop is paramount. Breaking everything to pieces and trying to make, say, the network subsystem for “user” desktops work differently than servers or phones is beyond missing the point.

Recent business moves are reminiscent of the dark days of Unix in the 80’s and early 90’s. The lack of direction, and the deliberate backbiting and side-dealing with organizations which were consciously hostile to the sector in the interest of short-term gain, set back not just Unix but serious computing on small systems for decades. Not to mention that it guaranteed the general population became acquainted with pretty shoddy systems and was wide open to deliberate miseducation about the role of computers in a work environment.

It’s funny/scary to think that office workers spend more hours a day touching and interacting with computers than carpenters spend interacting with their tools, yet understand their tools almost not at all, whereas the carpenter holds a wealth of detailed knowledge about his field and the mechanics of it. And before you turn your pasty white, environmentally aware, vegan nose up at carpenters with the assumption that their work is simple or easy to learn, let me tell you from direct experience that it is not. “Well, a hammer is simpler than a computer and therefore easier to understand.” That is true of a hammer, but what about the job the carpenter is doing, or his other tools, or more to the point, the way his various tools and skills interact to enable his job as a whole? Typing is pretty simple, too, but the scope of your job probably is not as simple as typing. Designing or even just building one house is a very complex task, and yet it is easier to find a carpenter competent at utilizing his tools to build a house than an office worker competent at utilizing his tools to build a solution within what is first and foremost an information management problem domain.

That construction crewmen with a few years on the job hold a larger store of technical knowledge to facilitate their trade than white-collar office workers with a few years on the job do to facilitate theirs is something that never seems to occur to people these days. When it does, it doesn’t strike the average person that something is seriously wrong with that situation. Nothing seems out of place whether the person perceiving this is an office worker, a carpenter or a guy working at a hot dog stand. We have simply accepted as a global society that nobody other than “computer people” understands computing, the same way that Medieval Europeans accepted that nobody other than nobility, scribes and priests could understand literacy.

It is frightening to me that a huge number of college-educated developers seem to know less about how systems work than many Linux system administrators do, unless we’re strictly talking Web frameworks. That equates to exactly zero durable knowledge, since the current incarnation of the Web is built exclusively from flavor-of-the-week components. That’s all to the benefit of the few top players in IT and to the detriment of the user, if not actually according to a creepy plan somewhere. There probably was never a plan that was coherent and all thought up at once, of course, but things have clearly been pushed further in that direction by those in the industry who have caught on, since the opportunity has presented itself. The “push” begins with encouraging shallow educational standards in fundamentally deep fields. It’s sort of like digital cancer farming.

Over in my little corner of the universe I’m trying hard to earn enough to push back against this trend, but my company is tiny at the moment and I’m sure I’ll never meet an investor (at least not until long after I really could use one). In fact, I doubt any exist who would really want to listen to a story about “infrastructure” because that admits that general computing is an appliance-like industry and not an explosive growth sector (well it is, but not in ways that are hyped just now). Besides, tech startups are soooo late 90’s.

Despite how “boring” keeping a stable system upon which to build cool stuff is, our customers love our services, and they are willing to pay through the nose for custom solutions to real business problems — and these are SMBs that have never had the chance to get custom anything because they aren’t huge companies. Basically, all the money that used to go to licensing now goes to things that actually save them money by reducing total human work time instead of merely relocating it from, say, a typewriter or word processor to a typewriter emulation program (like Writer or Word). This diversion of money from the same-old-crap to my company is great, but it’s slow going.

For starting from literally nothing (I left the Army not too long ago) this sounds like I’ve got one of those classic Good Things going.

But there is a problem looming. We’re spending all our time on custom development when we should be spending at least half of that time cleaning up our upstream (Fedora, Vine and a smattering of specific upstream projects) to get the great benefit of awesome userland experiences without squandering the last Nix left. If we can’t stick to a relatively sane computing stack, a lot of things aren’t going to work out well over the long term. Not that we or anyone else is doomed, but as a community we are certainly going to spend a lot of time in a digital hamster wheel, fixing all the crap that the new generation of inexperienced developers is working overtime to break today.

As for my company, I’d like to hop off this ride. I know we’re going to have to change tack at some point because the general community is headed to stupid land as fast as it can go. The catch, though, is answering the question of whether I can generate enough gravity in-house to support a safe split or a re-centering around something different. Should I take over Vine by just hiring all the devs full-time? Pivot to HURD? Taking over Arch or Gentoo might be a bit much, but they’ve got some smart folks who seem to grok Unix (and aren’t so big that they’ve grown the Ubuntu disease yet). Or can I do what I really want to do: pour enough effort into sanifying Fedora and diversifying its dev community that I can use it as a direct upstream for SMB desktops without worry? (And I know this would benefit Red Hat directly, but who cares — they aren’t even looking at the market I’m in, so this doesn’t hurt anybody, least of all me. Actually, just for once we could generate the kind of synergistic relationship that open source promised in the first place. Whoa! Remember that idea?!?)

Things aren’t completely stupid yet, but they are getting that way. This is a problem deeper than a few distros getting wacky and attracting a disproportionate number of Windows refugees. It is evident in the fact that I have had to cut my hiring requirement down to “smart people who get shit done” — however I can get them. I have to train them completely in-house in the Dark Arts, usually by myself and via surrogate example, because there are simply no fresh graduates who know what I need them to know or think the way I need them to think — it is nearly impossible to find qualified people straight out of school these days. I’ve got a lot of work to do to make computing as sensible as it should be in 2012.

I might catch up to where I think we should be in 2012 by around 2030. Meh.

A Note on “Web Applications”

This is a subject worthy of a series of its own, but a short scolding will have to do for now…

The “Web” is not an applications environment. At all. Web security is a joke. The whole idea is a joke — that’s like trying to have “newspaper security” — because the whole point of HTTP is untrusted, completely public, unthrottled publication of textual data, and that’s it. Non-textual data is linked in, not even native within a document (ever heard of, say, an <img> tag, or that whole series that starts <a something="">?), and the various programs that can use HTTP to read HTML and related markup may or may not fetch the extra stuff, but the text is the core of it. Always.

People get wrapped around the axle trying to develop applications that run within a browser and always hit funny walls when trying to deliver interactivity. We’ve got a whole constellation of backhacks to HTTP now that are used to pretend that a website can be an application — in fact, at this point I think more time is probably spent working within this kludged edge case than in any other specific area of computing, and that’s really saying something. It says a lot about the need for an applications protocol that really can do the things we wish the Web could do, and it speaks volumes about how little computing is actually understood by the majority of people who call themselves developers today.

If “security” means being selective about who can see what, then you need to not be using the Web at all. Websites are all-or-nothing, whether or not we delude ourselves that layers of “security” backhacks can make them act against their nature.

If “security” means being selective about who can make changes to the data presented on a website, you need to make those modifications by some means other than web forms. And yes, I mean you should be doing the “C”, “U” and “D” parts of CRUD from within a real program working over a protocol that is made for this purpose (and therefore inherently securable, not hacked up to be sort-of securable like HTTPS), and ditch your web admin interfaces*. The Web can then be used as a presentation face for your data, as it was intended (dynamic pages are a bit of a hack of their own, but they don’t fundamentally alter the concept underlying the design of the protocols involved), and your data input and interactivity faces are just applications other than the browser. Think of World of Warcraft the game, the WoW Armory, and the WoW Forums: very different faces to match very different use cases and security contexts, but all connected by a unified data concept centered on the character.

Piggybacking on ssh is a snap, and the largely forgotten ASN.1 encoding is actually easier to write against than JSON or XML-RPC — though you might have to write your own library handler, which can be true even for JSON, depending on your environment.
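To make the ssh-piggybacking idea concrete, here is a minimal sketch in Python. All of the specifics — the remote handler path, the hostname, the JSON wire format — are invented for illustration; the point is only that the “U” of CRUD travels through ssh (which provides authentication, encryption and host verification natively) rather than through a web form.

```python
import json
import subprocess

def make_update(table, row_id, fields):
    """Build a JSON message describing a single update operation.
    (Hypothetical wire format -- any unambiguous encoding works.)"""
    return json.dumps({"op": "update", "table": table,
                       "id": row_id, "fields": fields})

def send_over_ssh(host, handler, message):
    """Pipe the message to a remote handler program through ssh.
    ssh supplies the security layer, so nothing web-shaped is involved."""
    result = subprocess.run(["ssh", host, handler],
                            input=message.encode(),
                            capture_output=True, check=True)
    return result.stdout

msg = make_update("customers", 42, {"phone": "555-0100"})
# send_over_ssh("db.example.com", "/usr/local/bin/crud-handler", msg)
```

The browser never sees this path at all; the website only ever reads the resulting data.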

[*And a note here: the blog part of this site currently runs on WordPress, and that’s a conscious decision against having real security, based on what I believe the use case to be and the likelihood of someone taking a hard look at screwing my site over. This approach requires moderation and a vigilant eye. My business servers don’t work this way at all: they can serve pages built from DB data and static pages built from other data, but that’s it — there is no web interface that permits data alteration on those sites I consider actually important. Anyway, even our verbiage is screwed up these days. Think about how I just used the word “site”. (site != server != service != application)]

I can hear you whining now: “But that requires writing real applications that can touch the database or file system or whatever, and that’s *hard* and requires actual study! And maybe I’ll have to write an actual data model instead of rolling with whatever crap $ORM_FRAMEWORK spatters out, and maybe that means I have to actually know something about databases, and maybe that means I’ll discover that MySQL is not good at handling normalized data, and that might lead me to use Postgres or DB2 on projects, and management and marketing droids don’t understand things that aren’t extensively gushed over by the marketing flacks at EC trade shows and… NOOOoooo~~~!” But I would counter that all the layers of extra bullshit involved in running a public-facing “Web 2.0” site today are more complicated, more convoluted and further removed from a sane computing stack — and therefore, at this point, more involved, costly and “hard” to develop and maintain — than real applications are.

In fact, since all of your “web applications” are built on top of a few basic parts, but in between most web developers pile on gigantic framework sets that are larger, harder to learn and far shorter-lived than the underlying stack, both your job security and your application’s security will improve if you actually learn the underlying SQL, protocol workings and interfaces in the utility constellation that comprises your “application”, instead of leaving everything up to the mystery of whatever today’s version of Rails, Dreamweaver or $PRODUCT is doing to/with your ideas.
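As a small illustration of “learn the underlying SQL”, here is a sketch using Python’s built-in sqlite3 module (standing in for Postgres; the schema and table names are invented). A normalized two-table layout and one explicit join — no ORM in the middle deciding what SQL actually runs:

```python
import sqlite3

# In-memory database for demonstration purposes only.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
    CREATE TABLE invoice  (id INTEGER PRIMARY KEY,
                           customer_id INTEGER NOT NULL REFERENCES customer(id),
                           total_cents INTEGER NOT NULL);
""")
db.execute("INSERT INTO customer (id, name) VALUES (1, 'Acme')")
db.executemany("INSERT INTO invoice (customer_id, total_cents) VALUES (?, ?)",
               [(1, 1200), (1, 800)])

# One explicit, readable join and aggregate -- you know exactly what ran.
row = db.execute("""
    SELECT c.name, SUM(i.total_cents)
      FROM customer c JOIN invoice i ON i.customer_id = c.id
     GROUP BY c.id
""").fetchone()
print(row)  # ('Acme', 2000)
```

A query like this outlives any given framework version, because SQL itself is the stable layer underneath all of them.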

I’m not bashing all frameworks; some of them are helpful, and anything that speeds proper development is probably a good thing (and “proper” is a deliberately huge and gray area here — which is why development is still more art than science and probably always will be). I am, however, bashing the ideas that:

  1. The web is an applications development environment
  2. Powertools can be anything other than necessarily complex

At this point the mastery burden for web development tools in most cases outstrips the mastery burden for developing normal applications. This means they defeat their own purpose, but people just don’t know it, because so few have actually developed a real application or two and discovered how that works. Instead, most have spent lots of time reading up on this or that web framework and showing off websites to friends and marks while telling them they’re working on a “new app”. Web apps feel easier for reasons of familiarity, not because they are actually less complex.

Meditate on that last sentence for a moment.

The dangerously seductive thing about web development is that the web is very good at giving quick, shallow results measurable in pixels — “shallow and quick” being the dangerous bit and “pixels” being the seductive bit. There is a reason folks like Fred Brooks have insinuated that the drive to “just show pixels” is the bane of good programming, and have called pixels and pretty screens “chicken lipstick”. Probably the biggest challenge in professional software development is that if you’re really developing original works, screens are usually the last thing you have to show, after a huge amount of the work is already done. Sure, you can show how the program logic works, and you can trace through routines in an interpreter if you’re using a script language somewhere in the mix, like Python or Scheme, but those sorts of concrete progress from a programmer’s perspective don’t connect with most clients’ concerns or imaginations. And so we have this drive to show pixels for everything.

Fortunately I’m the boss where I work, so I can get around this to some degree, but it’s extremely easy to see how this drive to show pixels starts dicking things up right from the beginning of a project. It’s very tempting to tell a client, “Yeah, give me a week and I’ll show you what things should look like.” Because that’s easy. It’s a completely baseless lie, but it’s easy. As a marketer, project manager, or anyone else who contacts clients directly, it is important to learn how to manage expectations while still keeping the client excited. A lot of that has to do with being blunt and honest about everything, and also explaining at least a little of how things actually work. That way, instead of saying “I’ll show you how it will look”, you can say “I’ll show you a cut-down of how it will work” and then go write a script that does basically what your client needs. You can train clients to look for the right things in an interpreter session, and by showing them a working model instead of just screens (which are completely functionless for all they know), as a developer you get a chance to do two awesome things at once: prototype and discover early where your surprise problems will probably come from, and — if you work hard enough at it — write your “one to throw away” right out of the gate, at customer-sale prototyping time.
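What does a “cut-down of how it will work” look like? Something as small as this — a throwaway working model of, say, an order-intake flow (the domain and all names here are invented for the sake of example), poked at live in a Python interpreter in front of the client. No pixels, but real behavior:

```python
# Throwaway prototype: just enough model to demonstrate behavior live.
class Order:
    def __init__(self, customer):
        self.customer = customer
        self.lines = []          # (item, quantity, unit_price_cents)

    def add(self, item, qty, unit_price_cents):
        self.lines.append((item, qty, unit_price_cents))

    def total(self):
        # Sum of quantity * unit price across all line items.
        return sum(qty * price for _, qty, price in self.lines)

order = Order("Acme")
order.add("widget", 3, 250)
order.add("gizmo", 1, 1000)
print(order.total())  # 1750
```

Twenty minutes of this in front of a client surfaces misunderstandings about the actual business rules far faster than a week of mockup screens ever could — and it doubles as the “one to throw away”.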

But web screens are still seductive — the only thing most people read books about these days (if they read at all) — and the mid-term future of general computing looks every bit as stupid, shallow and rat-race driven as the Web itself.

I can’t wait to destroy all this.

Haha, just kidding… but seriously…