Category Archives: Science & Tech

Everything from lampooning Popular Science to picking on Microsoft and Oracle belongs here.

Racing to remove the last Nix

This post was prompted by a discussion on ScientificLinuxForum. The subject diverges significantly from the original discussion, so I’ve placed it here instead. The thread was initially about the release of RHEL 6.3, but discussions there have a tendency to wander, particularly since many are worried that we are in the last days of free computing, what with the advent of UEFI lock-down, DRM-everything, and new laws that prevent the digital equivalent of changing your own oil. In any case, this post doesn’t belong in the thread and may be of interest to a more general audience.

Unix philosophy is shifting. We can see it everywhere. Not too long ago on a Fedora development list an exchange took place that went roughly like this:

“I wanna do X, Y, and Z this new way I just made up and everyone says it’s bad. Why?”
“It breaks with everything Unix has done for 40 years that is known to work.”
“I don’t care what Unix has done. I want to make it work this way instead.”
“It’s insecure.”
“ummm… oh…”
“Besides, it introduces binary settings, so the user can’t adjust and fix them manually if the system goes to poo. So users can’t write scripts to change their settings without going through an API you’ve yet to even consider writing, causing more work for everyone, and at the same time security is going to suffer. Besides, telling someone to reinstall their OS because one file got corrupted is not acceptable by our way of thinking.”
“uhhhhh… oooh?”

Let me be clear, there is the world of Unixy operating systems, there is the Unix design philosophy, and then there is the Unix religion. Part of the fun in a flame war is detailing how your opponent is a proponent of whatever part of the spectrum would most undermine their position at the time (usually the religion accusation is thrown, unless someone makes a dive straight to Nazism). The problem with dividing the world of grown-up operating systems into three stripes that way, though, is that it misses why a religion evolved in the first place.

Religion is all about belief, in particular a belief in what is safe and reliable. If I don’t offend God I’m more likely to get into Heaven — that’s safe and reliable. If I don’t give every process the ability to write arbitrarily then I’m less likely to have problems — that’s safe and reliable. Whatever God is up to I’m not really sure, he hasn’t let me in on it all, but that restrictions on write access prevent things like a rogue process (malicious, buggy or deliciously buggy) from DoS’ing the system by filling it up with garbage is something I can understand.

But not everyone can understand that, just like I can’t understand God. That’s why we have guidelines. Er, religions. The fun part about the Unix religion is that it’s got a cultish flair, but the most functional part is that its effects can be measured and generally proved (heuristically or logically, if not formally) to be better or worse for system performance and service provision.

It is good to question “why” and be a rebel every so often, but you’ve got to have a point to your asking, and you’ve got to be prepared to hear things you may not have expected — like the response “It’s insecure,” which may be followed by an ego-demolishing demonstration. But people don’t like having their egos demolished, and they certainly hate studying up on new-things-that-are-actually-old, and yet they still adore the question “why” because it sounds so revolutionary and forward-thinking.

But IT people are educated, right? They are good at dealing with detailed situations and evaluating courses of action before committing to this or that plan, right? It’s all about design, right?

I’m here to tell you that we’ve got problems.

We are absorbing, over time, less talented and grossly inexperienced developers across all of techdom. It started with tossing C in favor of Java, and now even that is being tossed in favor of Ruby in some places because it’s like “easier Java… and… hey, Rails!” (This isn’t to say that Ruby is a bad language, but certainly that it shouldn’t be the only one you know, or even the first one you learn.) Almost no universities treat hard math or electrical engineering courses as prerequisites for computer science any more. In fact, the whole concept of putting hard classes first to wash out the stupid or unmotivated has nearly evaporated. This is not just in CS courses, but the dive has been particularly steep there. These days, as ironic as it may seem, the average programmer coming from school knows next to nothing about what is happening within the actual machine, whereas a hobbyist or engineer coming from another field who is fascinated with machine computation understands quite a bit about such things.

Part of it probably has a lot to do with motivation. A large percentage of university students are on a conscious quest for paper, not knowledge, and want to get rich by copying what is now an old idea. That is, they all dream of building the next Facebook (sorry, can’t happen, Facebook will version up; at best you might get hired by them, loser). On the other hand, every hobbyist or out-field engineer who spends personal time studying sticky problems in computer science is genuinely interested in the discipline itself.

It is interesting to me that most of my self-taught friends have either worked or are working through the MIT open coursework on SICP, K&R, Postgres source tours, and a variety of other fairly difficult beginner and advanced material (and remember their reference points remarkably well), while most of the CS graduates I know are more interested in just chasing whatever the latest web framework is and can’t explain what, say, the C preprocessor does. Neither group spends much time writing low-level code, but the self-educated group tends to have some understanding at that level and genuinely appreciates opportunities to learn more while many of the provably educated folks don’t know much, and don’t care to know much, about what is happening within their machines. (That said, I would relish the chance to go to back to school — but since I know I’ll never have the chance I’ve just got to read the best sources I can find and have my own insights.)

This has had a lot of different effects. In the past, as a community, we had a problem with Not Invented Here syndrome (aka NIH — yes, it’s got its own acronym (and sometimes there are good reasons to make NIH a policy)) and sometimes deliberate reinventing of the wheel. Now we have the even worse problems of Never Heard of That Before and Let’s Jam Square Pegs Where They Don’t Belong (like trying to coerce the Web into being an applications development framework instead of a document linking and publication service, for example).

A lot of concerns have been raised over the last few years about the direction Unix has been headed in (or more specifically, a few very popular consumer-oriented distributions of Linux which represent the majority of Unix in desktop and tablet use today). The issues range from attempts to move settings files from plain text to binary formats, to efforts to make the desktop into one giant web page, to efforts to make the system behave more Windows-like (give anyone the privileges to install whatever packages they want into unrestricted environments (protip: toy with the last two words here — there is a solution…)), to many other instances which scream of misinterpreting something that is near to someone’s experience (“easy”) as being less complex (“simple”). Some of these are just surface issues, others are not. But most grind against the Unix philosophy, and for good reason.

Most of these un-Unixy efforts come from the “new” class of developer. These are people who grew up on Windows and seem determined to emulate whatever they saw there, but within Unix. Often they think that the way to get a Unix to feel like Windows is to muck with the subsystems. Sometimes this is because they think that they know better; sometimes it is because they realize that the real solutions lie in making a better window manager, but since that is hard, subsystems are the easier route (and this feels more hackish); but most often it is simply because they don’t understand why things work the way they do and lack the experience to properly interpret what is in front of them. What results are thoughts like “Ah, I wish that as an unprivileged user I could install things via binary bundle installers, like off downloads.com in Windows, without remembering a stupid password or using some stupid package manager, and get whatever I want. I can’t remember my password anyway because I have the desktop set to auto-login. That would put me in charge as a user!” Of course, they think this without ever realizing that this situation in Windows is what puts Eastern European and Chinese government crackers in charge of Windows worldwide.

This gets down to the core of operating system maintenance, and any system administrator on any operating system knows it, but the newcomer who wants to implement this “feature” doesn’t. What they think is “Linux permissions are preventing me from doing that? Linux permissions must be wrong. Let’s just do away with that.” And they go on to write an “extension” which isn’t an extension at all, but rather a huge security flaw in the system. And they do it deliberately. When others say “that’s a bad idea” they say “prove it”, and accusations of religious fundamentalism soon follow.

But there could have been a better solution here. For example, group permissions were invented for exactly this purpose. There is (still) a wheel group in every Linux I’ve seen. There’s even still a sys group. But I’ve seen them actually used properly only once or twice, ever — instead we have another triangular wheel which has been beaten round over the years called sudo and a whole octopus of dangly thingies called PAM and SE domains and… and… and… (do we really want one more?)
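To make the point concrete, here is a minimal sketch of the old-school approach (the group name is the stock one; the user name, the sudoers fragment path and the yum path are just examples):

    # put trusted users in the wheel group that is already there
    usermod -aG wheel alice

    # hypothetical /etc/sudoers.d/packages fragment (edit with visudo):
    # members of wheel may run the package manager as root, and only that
    %wheel ALL = (root) /usr/bin/yum

No new subsystem, no new daemon, no API: a group and one line of policy, and now ordinary users can install packages without holding general root.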

Anyway, {groups, [insert favorite permissions system]} aren’t a perfect solution, but they go a long way to doing things right in a simple manner without a lot of mucking about with subsystem changes. Back in the Old Days users had the same concerns, and these systems were thought out way back then. But people don’t go back and research this sort of thing. Learning old, good ideas is hard. Not the doing, really; sitting still and thinking long enough to understand is what is hard for a lot of people. There is a wealth of knowledge scattered throughout the man pages, info docs and about a bajillion websites, open source books, mailing list archives, newsgroup archives, design documents, formal treatments, O’Reilly books, etc. (what?!? books?!? How old fashioned! I’m not reading a word further!) but few people take the time to discover these resources, much less actually use them.

SELinux is another good idea someone had. But it’s not immediately obvious to newcomers, so most folks just turn it off because that’s what someone else said to do. This is totally unnecessary, but it’s what a lot of people do. It also gets very little development attention on Ubuntu, the most Windows-like Linux distro, because that distro has the highest percentage of uneducated ex-Windows users. You know what most exploits are written for? SELinux-disabled Ubuntu boxes running a variety of closed-source software (Adobe products are pretty high on the list, but there are others) and unsecured web services (PHP + MySQL (i.e. hacked-up Drupal installations) top the list, but to be fair they are the most prolific also). An example of the misconceptions rampant in the Ubuntu community is the belief that running something in a chroot makes it “secure” because it is colloquially called a “chroot jail”. When told that chroot doesn’t really have anything to do with security and that a process can escape from a chroot environment if it wants to, they get confused or, even funnier/sadder, want to argue. They can’t imagine that subsystems like mock depend on chroot for reasons other than security.
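For the record, here is roughly what mock-style tools use chroot for — a clean, reproducible build root, not confinement. A sketch (the path and the package set are arbitrary examples):

    # populate a throwaway root and run a command inside it; note that
    # nothing about this confines a hostile process running as root
    buildroot=/var/tmp/throwaway-root
    mkdir -p "$buildroot"
    yum -y --installroot="$buildroot" install bash coreutils rpm-build
    chroot "$buildroot" /bin/bash -c 'rpmbuild --version'

The value is a pristine, known dependency set for the build, not a wall around it.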

Why on earth would anyone disable a tool like SELinux if they are going to digitally whore their system out all over the internet by exposing the sensitive bits the way PHP programs do? Because they just don’t know. Before turning it off, no Apache screen. After turning it off, feathers! Before turning off SELinux and installing Flash, no pr0nz on the screen, just a black box that said something was broken on pornhub.com. After turning it off, tits! The immediate effect of turning it off is simple to understand; the long-term effect of turning it off is hard to understand; learning the system itself requires grokking a new concept, and that’s hard. That’s why. And even better, the truly uninformed think that setenforce 0 is some slick haX0r trick because it’s on the command line… oooh.
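The sane alternative to setenforce 0 is about three commands’ worth of effort. A sketch (the boolean and domain shown are just examples; pick the ones your denial actually names):

    # see what SELinux actually denied instead of guessing
    ausearch -m avc -ts recent

    # flip the one boolean that covers the use case, persistently
    setsebool -P httpd_can_network_connect on

    # or, worst case, make a single domain permissive instead of the whole box
    semanage permissive -a httpd_t

Every one of those keeps the rest of the policy protecting the rest of the system.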

So, simply put, pixels.

Pixels are the interest these days. Not performance, not sane subsystems, not security, not anything else. The proper arrangement of pixels. Pixels can put tits on the screen; security subsystems and text configuration files can’t do that — at least, the connection between the two is impossible for the average ex-Windows user to manage.

The new users coming to Linux trying to Dozify it are doing so in the pure interest of pixels and nothing more. They don’t know much about information theory, relational data theory or any of the other things that people used to be compelled to study (“nuh uh! I learnt how to make Drupal show words on the screen, so I know about RDBMSs!”). Many mistake the information in a howto on a blog for systems knowledge, and most will never actually make the leap from knowledge to wisdom. They tinker with Linux but most of that tinkering doesn’t involve exploration as much as it involves trying to reshape it in the image of an OS they claim to be escaping. They can tinker with Linux because you just can, and you can’t with OS X or Windows.

You can make Linux your own. This is the right reason to get involved: whether your motivation is primarily pixels or whatever, any reason is a good reason to be interested in new development. But you can’t roll in assuming you know everything already.

And that’s the core problem. Folks show up in Linux land thinking they know everything, willing to break with over 40 years of tremendous computing success and tradition. Some even go so far as to arrive with prior intent to break things just for the social shock value. But ultimately it’s all in the interest of pixels.

But we don’t have to compromise the underlying OS and subsystems to get great desktop performance, run games, get wacky interactive features that aren’t available anywhere else, do multimedia (legally via Fluendo or via more natural means), or even just put tits on the screen. In fact all those things were possible (even easy) about a decade ago on Linux, but few people knew enough about the different components to integrate them effectively. What we need is developers who are educated enough about those separate systems to develop competently within and atop them without creating n00beriffic, monolithic junk designs that spread dependencies like cancer across the entire system.

The original triad of RPM, Yum and PackageKit was a great example of how to do it right — not perfect, but very nearly. They were linearly dependent, and the dependencies were exclusively top-down, excepting necessary core system libraries/runtimes (the presence of Python, openssh and Bash, for example, is not an unreasonable expectation even on a pretty darn slim system).
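You can verify that direction of dependency yourself with the stock tools. A quick sketch (package names as they stood on rpm-based systems of that era):

    # PackageKit pulls in yum...
    rpm -q --requires PackageKit | grep -i yum

    # ...but yum knows nothing of PackageKit, and rpm knows of neither
    rpm -q --requires yum | grep -i packagekit          # expect no output
    rpm -q --requires rpm | grep -iE 'yum|packagekit'   # expect no output

Top-down, one direction, no cycles. That is what “linearly dependent” buys you.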

But then someone comes along who wants PackageKit to notify you with an audio alert when there is something worth updating. Instead of developing a modular, non-entangled extension that is linearly dependent on PackageKit (not knowing how to design such things well enough, nor being willing to take the time to read PackageKit and grok it first), the developer decides to just “add a tiny feature to PackageKit” — which winds up making it grow what at first appears to be a single, tiny dependency: PulseAudio.

So now PackageKit depends on a whole slew of things via PulseAudio that the new feature’s developer didn’t realize, and over time those things grow circular dependencies which trace back to the feature in PackageKit which provided such a cute little audio notifier. This type of story gets even more fun when the system becomes so entangled that, though each component comes from wildly differing projects, no individual piece can be installed without all the others. At that point it matters not whether a dependency is officially up, down or sideways relative to any other piece; they all become indirectly dependent on everything else.
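If you want to watch this kind of creep happen on your own system, a crude walker over rpm’s installed metadata is only a few lines. A sketch, assuming an rpm-based box (the default package name is just an example):

    #!/bin/bash
    # walk the transitive dependency closure of an installed package
    declare -A seen
    queue=("${1:-PackageKit}")
    while ((${#queue[@]})); do
        pkg=${queue[0]}; queue=("${queue[@]:1}")
        [[ ${seen[$pkg]} ]] && continue
        seen[$pkg]=1
        # resolve each requirement to the installed package providing it
        while read -r cap; do
            p=$(rpm -q --whatprovides "$cap" --qf '%{NAME}\n' 2>/dev/null | head -1)
            [[ $p == *"no package provides"* ]] && continue
            [[ $p && -z ${seen[$p]} ]] && queue+=("$p")
        done < <(rpm -q --requires "$pkg" 2>/dev/null | awk '{print $1}')
    done
    printf '%d packages in the closure of %s\n' "${#seen[@]}" "${1:-PackageKit}"
    printf '%s\n' "${!seen[@]}" | sort

Run it before and after a release or two and watch the closure count climb.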

HAL got sort of like that, but not through external package dependencies — its dependencies got convoluted on the inside, within its own code structure, which is just a different manifestation of the same brand of digital cancer. Actually, gcc is in need of some love to avoid the same fate, as is the Linux kernel itself (fortunately the corrosion of both gcc and the kernel is slower than HAL’s, for pretty good reasons). This sort of decay is what prompts Microsoft to ditch their entire code base and start over every so often — they can’t bear to look at their own steaming pile after a while, because maintaining it gets really, really hard, and that means really, really expensive.

In the story about PackageKit above I’m compressing things a bit, and audio alerts are not the way PackageKit got to be both such a tarbaby and grow so much hair at the same time (and it is still cleanly detachable from yum and everything below) — but it is a micro example of how this happens, and it happens everywhere that new developers write junk add-on features without realizing that they are junk. A different sort of problem crops up when people don’t realize that what they are writing isn’t the operating system but rather something that lives among its various flora, and that it should do one thing well and that’s it.

For example I’m a huge fan of KDE — I think when configured properly it can be the ultimate desktop interface (and isn’t too shabby as a tablet one, either) — but there is no good reason that it should require execmem access, the right to map memory that is writable and executable at the same time. Firefox is the same way. So is Adobe Flash. None of these programs actually requires such mappings — they can run whatever processes they need to within their own space without any issue — but they get written this way anyway, and so this need is foisted on the system arbitrarily by a must-have application. Why? Because the folks writing them forgot that they aren’t writing the OS; they are writing an application that lives in a space provided by the OS, and they are being bad guests. Don’t even get me started on Chrome. (Some people read an agenda into why Flash and Chrome are the way they are — I don’t know about this, but the case is intriguing.)
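If you’re curious which of your own processes are playing that game, a rough sketch over /proc lists every process holding a private mapping that is both writable and executable (run as root to see everything; this approximates what execmem governs, it is not an exact map of the permission):

    # list processes with private writable+executable memory mappings
    for m in /proc/[0-9]*/maps; do
        pid=${m%/maps}; pid=${pid#/proc/}
        grep -q 'rwxp' "$m" 2>/dev/null &&
            printf '%s\t%s\n' "$pid" "$(cat /proc/$pid/comm 2>/dev/null)"
    done

On a typical desktop the list is short, and the usual suspects named above are on it.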

Some distros are handling these changes better than others. The ones with strong guidelines, like Fedora, Arch and Gentoo, are faring best. The ones much further toward the “do whatever” side are suffering a bit more in sanity. Unfortunately, though, over the last couple of years a few of the guidelines in Fedora have been changing — and sometimes not just changing a little because of votes, but changing because things like Firefox, systemd, PulseAudio, PackageKit, etc. require such changes in order to function (they haven’t gone as far as reversing the library bundling rules completely to let Chrome into the distro, but it’s a possibility).

To be polite, this is an interesting case of it being easier to re-write the manual than to fix the software. To be more blunt, this is a guideline reversal by fiat instead of vote. There is clear pressure from obviously well-moneyed quarters to push things like systemd, Gnome3, filesystem changes and a bunch of other things that either break Fedora away from Linux or break Linux away from what Unices have always been. (To be fair, the filesystem changes are mostly an admission of how things work in practice and an opportunistic stab at cleaning up /etc. Some of the other changes are not so simply or innocently explained, however.)

This is problematic for a whole long list of technical reasons, but what it says about the business situation is a bit disconcerting: the people with the money are throwing it at people who don’t grok Unix. The worst part is that the breaking of Linux in an effort to commit such userland changes is completely unnecessary.

Aside from a very few hardware drivers, we could freeze the kernel at 2.6, freeze most of the subsystems, focus on userland changes, and produce a better result. We’re racing “forward” but I don’t see us in a fundamentally different place than we were about ten years ago on core system capabilities. This is a critical problem with a system like Windows, because customers pay through the nose for new versions that do exactly what the old stuff did. If you’re a business you have a responsibility to ask yourself what you can do today with your computers that you couldn’t do back in the 90’s. The idea here is that the OS isn’t what users are really interested in; they are interested in applications. It’s harder to write cool applications without decent services being provided, but the two are distinctly different sets of functionality that have no business getting mixed together.

In fact, Linux has always been a cleanly layered cake and should stay that way. Linux userland lives atop all that subsystems goo. If we dig within the subsystem goo itself we find distinct layers there as well that have no business being intertwined. It is entirely possible to write a new window manager that does crazy, amazing things unimagined by anyone else before without touching a single line of kernel code, messing with the init system, or growing giant, sticky dependency tentacles everywhere. (Besides, any nerd knows what an abundance of tentacles leads to…)

The most alarming issue over the longer term is that everyone is breaking Linux differently. If there were a roadmap I would understand. Sometimes it’s just time to say goodbye to whatever you cling to and get on the bus. But at the moment every project and every developer seems to be doing their own thing to an unprecedented degree. There has been some rumbling that a few things emanating from RH in the form of Fedora changes are deliberate breaks with Unix tradition and even the Linux standard, and that perhaps this is an effort to deliberately engender incompatibility with other distros. That sounds silly in an open source world, but the business truth about infrastructure components (and to be clear, platform equates to infrastructure today) is that while you can’t stop small competitors from emerging or users from doing what they want, no newcomer can make a dent in the direction of the open source ecosystem without very deep pockets.

Consider the cost of supporting just three good developers and their families for two years in a way that leaves them feeling comfortable about their career prospects after those two years. This is not a ton of money, but I don’t see a long line of people waiting to plop a few hundred thousand down on a new open source business idea until after it’s already been developed (the height of irony). There are a few thousand people willing to plop down a few million each on someone selling them the next already worn-out social networking scheme, though. This is because it’s easy to pitch a big glossy brochure of lies to suckers using buzzwords targeting an established market, but difficult to pitch the creation of a new market, because that requires teaching a new idea; as noted above, people hate having to work to grasp new ideas.

Very few people can understand the business argument for keeping Linux a Unixy system and how that can promote long-term stability while still achieving a distro that really can do it all — be the best server OS and maintain tight security by default while retaining the ever-critical ability to put tits on home users’ screens. Just as with developers, where effort and time aren’t the problem but understanding is, with investors the problem isn’t a lack of funds but rather a lack of comprehension of the shape of the computing space.

Ultimately, there is no reason we have to pick between having a kickass server, a kickass desktop, a kickass tablet or a kickass phone OS, even within the same distro or family. Implementing a sound computing stack first and giving userland wizards something stable to work atop is paramount. Breaking everything to pieces and trying to make, say, the network subsystem for “user” desktops work differently than for servers or phones is beyond missing the point.

Recent business moves are reminiscent of the dark days of Unix in the 80’s and early 90’s. The lack of direction, and the deliberate backbiting and side-dealing with organizations which were consciously hostile to the sector in the interest of short-term gain, set back not just Unix but serious computing on small systems for decades. Not to mention that it guaranteed the general population became acquainted with pretty shoddy systems and was left wide open to deliberate miseducation about the role of computers in a work environment.

It’s funny/scary to think that office workers spend more hours a day touching and interacting with computers than carpenters spend interacting with their tools, yet understand their tools hardly at all, whereas the carpenter holds a wealth of detailed knowledge about his field and the mechanics of it. And before you turn your pasty white, environmentally aware, vegan nose up at carpenters with the assumption that their work is simple or easy to learn, let me tell you from direct experience that it is not. “Well, a hammer is simpler than a computer and therefore easier to understand.” That is true about a hammer, but what about the job the carpenter is doing, or his other tools, or more to the point, the way his various tools and skills interact to enable his job as a whole? Typing is pretty simple, too, but the scope of your job probably is not as simple as typing. Designing or even just building one house is a very complex task, and yet it is easier to find a carpenter competent at utilizing his tools to build a house than an office worker competent at utilizing his tools to build a solution within what is first and foremost an information management problem domain.

That construction crewmen with a few years on the job hold a larger store of technical knowledge to facilitate their trade than white-collar office workers with a few years on the job do to facilitate theirs is something that never seems to occur to people these days. And when it does occur to someone, it rarely strikes them that something is seriously wrong with that situation. Nothing seems out of place, whether the person perceiving this is an office worker, a carpenter or a guy working at a hot dog stand. We have simply accepted as a global society that nobody other than “computer people” understands computing, the same way that Medieval Europeans had simply accepted that nobody other than nobility, scribes and priests could understand literacy.

It is frightening to me that a huge number of college-educated developers seem to know less about how systems work than many Linux system administrators do, unless we’re strictly talking Web frameworks. That equates to exactly zero durable knowledge, since the current incarnation of the Web is built exclusively from flavor-of-the-week components. That’s all to the benefit of the few top players in IT and to the detriment of the user, if not actually according to a creepy plan somewhere. There probably was never a plan that was coherent and all thought up at once, of course, but things have clearly been pushed further in that direction by those in the industry who have caught on, since the opportunity has presented itself. The “push” begins with encouraging shallow educational standards in fundamentally deep fields. It’s sort of like digital cancer farming.

Over in my little corner of the universe I’m trying hard to earn enough to push back against this trend, but my company is tiny at the moment and I’m sure I’ll never meet an investor (at least not until long after I really could use one). In fact, I doubt any exist who would really want to listen to a story about “infrastructure” because that admits that general computing is an appliance-like industry and not an explosive growth sector (well it is, but not in ways that are hyped just now). Besides, tech startups are soooo late 90’s.

Despite how “boring” keeping a stable system upon which to build cool stuff is, our customers love our services and they are willing to pay out the nose for custom solutions to real business problems — and these are SMBs who have never had the chance to get custom anything because they aren’t huge companies. Basically all the money that used to go to licensing now goes to things that actually save them money by reducing total human work time instead of merely relocating it from, say, a typewriter or word processor to a typewriter emulation program (like Writer or Word). This diversion of money from the same-old-crap to my company is great, but it’s slow going.

For someone starting from literally nothing (I left the Army not too long ago), it sounds like I’ve got one of those classic Good Things going.

But there is a problem looming. We’re spending all our time on custom development when we should be spending at least half of that time on cleaning up our upstream (Fedora, Vine and a smattering of specific upstream projects) to get that great benefit of having awesome userland experiences while not squandering the last Nix left. If we can’t stick to a relatively sane computing stack, a lot of things aren’t going to work out well over the long term. Not that we or anyone else is doomed, but as a community we are certainly going to spend a lot of time in a digital hamster wheel fixing all the crap that the new generation of inexperienced developers is working overtime to break today.

As for my company, I’d like to hop off this ride. I know we’re going to have to change tack at some point because the general community is headed to stupid land as fast as it can go. The catch, though, is answering the question of whether I can generate enough gravity in-house to support a safe split or a re-centering around something different. Should I take over Vine by just hiring all the devs full-time? Move to HURD? Taking over Arch or Gentoo might be a bit much, but they’ve got some smart folks who seem to grok Unix (and aren’t so big that they’ve grown the Ubuntu disease yet). Or can I do what I really want to do: pour enough effort into sanifying Fedora and diversifying its dev community that I can use it as a direct upstream for SMB desktops without worry? (And I know this would benefit Red Hat directly, but who cares — they aren’t even looking at the market I’m in, so this doesn’t hurt anybody, least of all me. Actually, just for once we could generate the kind of synergistic relationship that open source promised in the first place. Whoa! Remember that idea?!?)

Things aren’t completely retarded yet, but they are getting that way. This is a problem deeper than a few distros getting wacky and attracting a disproportionate number of Windows refugees. It is evident in that I am having to cut my hiring requirement to “smart people who get shit done” — however I can get them. I have to train them completely in-house in the Dark Arts, usually by myself and via surrogate example, because there are simply no fresh graduates who know what I need them to know or think the way I need them to think. It is impossible to find qualified people from school these days. I’ve got a lot of work to do to make computing as sensible as it should be in 2012.

I might catch up to where I think we should be in 2012 by around 2030. Meh.

Gravity: Not what it does, what it causes

In addition to the LibreCAD thing, OS support stuff, etc., I’m also working on an ERP solution for my clients. This solution has an enormous number of obvious advantages over the way they are using software right now, but it requires me, as an individual, to understand how their business works better than any individual in that company does (or at least it seems that way after talking with all the different section leaders over there). My thinking about their problems and how to model them accurately in an ERP system leads me back to the problems that could be solved in my own company by a similar system, which leads me to the idea of generalization of functions and rules. This is, of course, the goal of good software design, but without spending some time reflecting on the nature of problems, the nature of data, and the nature of computing, it is impossible to identify the parts that can correctly be said to be general to all problems of a similar type, and the elements that remain which make the specific problem at hand unique and identify it as that specific problem and not the same problem also found somewhere else.

This is, in a sense, what must be done when designing general functions, or correct object design, or deciding what utilities and handy tools should be considered “system utilities” and what others are just niche applications or personal tools. The concept of classification implies description, and at a certain level specifying a problem implies the ready resolution of that same problem (pretty neat!). But many times we get the identification of the problem wrong. More correctly, we inadequately or incorrectly specify a problem and then develop whatever solution naturally follows from this mistaken version of the problem as we (wrongly) understand it.

As I was driving home in the rain today I was thinking about this — both the nature of the specific problems my ERP system needs to solve for the customer and the nature of problem classification itself. This led to a thought on how a precise yet incorrect understanding of a problem can lead to silly things like the widely misquoted statement that “mathematics/physics states that bees can’t fly.” But quite clearly they do — which means neither mathematics nor physics says bees can’t fly, but rather an inaccurate mathematical model of flight predicts that bees can’t fly. But the misquote above is the more popular concept (it’s more fun, anyway, because it leaves the door open to magical thinking and the world of foolish mystery). The problem with this thinking is not just that it misleads people into thinking that math and physics don’t work — it also personifies math and physics (as in, it creates the idea that “they” are beings of some sort who would attempt to prevent bees from flying, as if the “can’t” in the misquote relates to the idea of permission) in a way that is weird and leads to more wrong thinking later. That idea led me down a mental journey into physics, and I recalled an article I read recently about M-theory, gravity and General Relativity — specifically, the parts in the article that relate to the idea that gravity might be repulsive at great distances.

So… Gravity: at great distances, is it repulsive? Does this make sense? Or is there, perhaps, a misconception of the problem space here? There quite definitely is a misconception of the problem — that is evident in our inability to describe gravity in a mathematically consistent way that reconciles relativity with quantum physics. But what sort of misconception? I’m not part of the physics community, but from the way articles written for the layman put things (which is highly suspect) it seems as though people are personifying gravity a bit much. In other words, they are looking for what gravity “does” and from that trying to derive an accurate model of how gravity does it, instead of thinking about what gravity “is” and then following the path of consequences of its existence.

The four basic forces (weak nuclear, strong nuclear, electromagnetic and gravity) are fairly well established. Interactions between those forces and things (space/matter/energy) have to explain all phenomena — and so far they pretty much do, which indicates that this is likely a correctish concept of the way things are. There doesn’t seem to be room for a fifth basic force, though there may be room for more things, or types of things, with which the forces might interact, or ways they might interact (that is, unthought-of dimensions, unobserved particles, etc., but not new forces themselves).

So… Gravity. In a sense it is a side effect of what happens when you concentrate mass in a single place. We say it “curves” space, though the way I tend to picture this in my mind is more a compression than a bending, because bending can only happen to things that are truly unbounded, and space seems to be bounded by itself. The most common demonstration is to take a taut, suspended sheet and place something heavy on it, then say “like this, it bends the surface down,” and then the path of a marble rolled across the sheet tends toward the heavy thing. But this is a massive oversimplification.

If we take the suspended sheet as a 2D object then the downward direction that it bends to when something is placed on it represents a third dimension for that thing to bend “to” — hence it is bendable because it is unbounded in a new direction. The situation with space and gravity doesn’t seem to be the same because while we are fairly certain there are far more than 3 simple dimensions, we’re not being told to imagine that space itself bends in a 4th extra direction due to the influence of gravity/the presence of mass.

Another problem is the reason for the bending. Space is being directly influenced by the presence of matter via gravity, whereas the sheet is being influenced by something pressing on it. In other words, to get something to bend in an extra direction/new dimension it must be pushed, not contracted. So space under the influence of gravity behaves more the way a wet cotton sheet contracts toward a spot where warm, dry air is applied, while the wet remainder stays loose and stretched out, than the way a sheet with something heavy on it gets forced down in a single spot by the heavy thing.

And another problem with the sheet example is the rolling of the marble in an attempt to explain how things get drawn toward “gravity wells” in the same way the marble gets drawn to the lower points of the sheet. In the case of gravity, the path of something under the influence of inertia is to continue moving in a straight line. But the straightness of that line is through space, and gravity has contracted space into a smaller area than it normally would have occupied (or at least it appears so), and so the “straight” line is now curved relative to things that aren’t as local to the mass as the moving thing is. With the sheet example the path of the marble is actually longer than the original path, so the example misleads.

So this explanation, and concepts derived from it, are wrong. Now let’s return to the 2D sheet, because the number of dimensions really isn’t important here. If we were to draw a straight grid on it (or just a bunch of uniformly spaced or uniformly random dots), get it wet and then apply a hairdryer to a single part of it, we would start to see a subtle warping of the lines on the sheet, though over the whole sheet the size and general shape of things would remain the same. Now if we traced a line from one side to the other we would continue on that line just fine, but our path would bend toward the point where we applied the hairdryer (interestingly, using a bounded space/area, the path bends but the medium itself does not; it just contracts in one area).

A more extreme example (and the one that came to mind while driving) was the shrink wrap we used to use when I was a teenager working at a video store. We would put items for sale into a polymer bag, and then blow hot air on the bag to make it shrink down. Being mischievous kids, we would sometimes experiment with the stuff during down times, and found that you could make some really weird things happen by blasting select spots of a large, flat sheet of the wrap material spread out against the wall or floor. We were forcing local contractions on a self-bounded 2D plane when we did this to material that was stretched out flat.

What does this have to do with gravity and localized attraction vs. distant repulsion? Everything. If we blow hot air at points opposite one another on the same stretched-out sheet, the wrap material in between the two spots gets stretched tighter. Any point that is closer to one spot than the other is pulled toward the nearer spot and away from the center — relatively speaking, this means that a point distant enough from one spot is traveling away from it. And this happens despite the fact that our actual influence on the sheet is constrictive in nature — all pull and no push. If space behaves in anything approaching this way, then gravity can easily have a secondary effect of causing points beyond a certain distance from one another to grow further apart and yet not have any properties of repulsion at all. This increasing distance between points beyond a certain range does not require that the sheet continue to expand overall, either. That the universe itself likely is expanding just confuses the issue a bit more.
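For what it’s worth, standard cosmology already produces exactly this kind of distance-dependent recession with no repulsive force anywhere in the picture. Hubble’s law says the apparent recession velocity is simply proportional to distance,

    v = H_0 d

where H_0 is the Hubble constant: below some distance, local binding wins and things stay together; beyond it, everything appears to move apart. The sheet analogy above is just my attempt to get the same qualitative behavior out of pure contraction.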

To a tiny observer on a whirling rock out in deep, cold space this effect could definitely look forbiddingly like an unfriendly “push” effect of gravity. If that observer were prone to personify elements of existence (in other words, to assign blame for effects to them), then it would be very natural to blame the observed, possibly accelerating expansion of the universe on a property of gravity rather than on an indirect effect or condition that gravity causes. One effect per force makes more sense than a magical force that somehow exhibits one behavior in one place and another behavior in another place.

Of course, the mental idea above that space doesn’t “bend” is probably going to bother people who carry with them a mental model of space as a bendy thing, and of black holes as places where space “folds back on itself” when contraction is really the issue. The mental picture of a black hole just gets all screwy then — but this is probably more correct, not less. Anyway, with teeny tiny dimensions apparently being everywhere, so small in scale yet so universal because they represent entire dimensional planes that have been prevented from much direct interaction with our normal Euclidean(ish) selves, it seems likely that the folded-up space stuff that makes up matter and energy might just be manifestations of tiny black holes compressed in directions that are not part of our 3 spatial dimensions, and all those black holes bubbling in and out of existence could have a lot to do with the basics of why/how all subatomics are unstable, yet predictably so. But that is a whole different discussion.

I am completely unqualified to make any statements about physics anyway, but these were the thoughts that went through my mind as I drove home in the rain. Unfortunately I’ll probably never have the time to really study physics, so the common crap written in the press for the layman (this includes most “science magazines” as well) is all I’ll likely ever get a chance to read, and it will keep misleading me into dumb mental models like the ones above.

Google’s Hearing and Insertions

Google’s CEO testified before Congress the other day during an antitrust hearing. The basic issue is whether Google is attempting to use its de facto monopoly on search to develop, or even in some cases force, a monopoly on other services which are not stated anywhere in its charter. The monopoly on search is legal. Nobody was ever forced to use Google for searching, and until very recently there weren’t any decent alternatives anyway. Providing a great service and capturing a huge customer base is perfectly legal. The issue here is whether Google is using its search monopoly as a gateway to pitching its own services in other areas to generate monopolies over general data services, and thereby extend its monopoly to everything.


[Google obviously does plug its own services as if they were search results — and plugging the Chrome browser is one of the most important things the company could do to exert direct control over what information users see and use over the longer term.]

This would not be legal, for a few reasons — one of which is that Google would be able to grant itself an unfair advantage. Hordes of unsavvy internet users who don’t know much about how computers or the internet work would never be able to find things without Google, because in the minds of millions of lay users Google search equates to the gateway to the internet, and things they click on from the main Google search page are, in their minds, already linked to Google. So Google favoring its own services in search equates to users simply never learning about anything other than Google services. The problem is that as hordes of lay users gravitate to one or another online service, the network effect comes into play, making whichever service takes an early lead overwhelmingly more important in the market than any other. Evidence of this is everywhere, and for good or ill, the fact is that most data service markets tend toward monopoly almost out of necessity.
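A rough way to quantify that network effect is Metcalfe’s law, which values a network by its number of possible connections:

    V \propto n^2

so a service with twice the users is worth roughly four times as much, and an early lead compounds into a winner-take-all position. (Take the exponent as a heuristic, not gospel; the point is only that value grows much faster than user count.)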

Google is obviously aware of this, and so are consumer protection groups. The creepy thing is that Google is not just offering search and online services; it is trying to offer everything online. Including all of your data. So under the Google model (actually, under all cloud models — which are all dangerously stupid) every bit of your computing data — personal photos, music files, blog posts, document files for work, document files for not-work… everything — would be hosted on Google servers and saved on Google Inc.’s hard disks, and nothing would be stored on your own disk. In other words, nothing would in any practical way be your own property, because you would not have any actual control over anything. And heaven forbid that an earthquake knocks out your internet service, or anything else happens that disconnects you from the internet.

If one can’t see the danger here, one simply doesn’t have one’s thinking cap on. Anyway, this being a dangerously stupid way to handle your personal data is beside the point — the majority of internet users do not understand the issues well enough to know that declining to manage their own data storage is a bad idea. But then again, most people don’t even recognize that their entire existence is merely a specific reordering of pre-existing matter, and therefore by definition simply a specific set of data. The information a person generates or intersects with in their life is the sum total of what they are — and this goes quite beyond being important somewhere on the web; as technology advances over the next few decades it will increase in importance as the very nature of who and what we are increasingly mingles with automated data processes.

This is the real goal — extend monopoly to information as a general concept, and thereby generate a monopoly on modern existence (and I’m not simply talking about some ephemeral concept of what it is to be “modern” — in concrete terms we really are just masses of information). If there ever was a brilliant business plan, this is it. And it is a bit scary to think things might go that way. Google’s “Don’t be evil” theme is just words — as I have written elsewhere on this blog about how geopolitics works, power is about capability not about intent. Muslims may adhere to a religion based entirely on absolute social and political dominance of the planet, but being incapable of actually achieving it makes them a geopolitical nuisance over history instead of the driving force of history. On the other hand America’s intention is absolutely not to actually colonize and take over the world, but the fact that it is actually capable of doing this makes lots of people (even some Americans) panic and/or kick and scream about what they perceive as “American Imperialism” even though this is in no way the actual case.

So what about Google? That Google actually is developing the situation to make a drive at information monopoly is one thing. Their intent to not be evil is merely an intent. The capability expressed by a realized information monopoly would be of much more importance to the 1st world than even an American capability to successfully invade Scandinavia, for example, and is therefore something that should be guarded against.

Gradkell Systems: Not assholes after all

I was contacted yesterday (or was it two days ago? I’ve since flown across the international date line, so I’m a bit confused on time at the moment) by the product manager for DBsign, the program that is used for user authentication and signatures on DTS (and other applications? unsure about this at the moment). He was concerned about two things: the inaccurate way in which I described the software and its uses, and the negative tone in which I wrote about his product. It was difficult to discern whether he was upset more about me making technically inaccurate statements or my use of the phrase “DBSign sucks”.

Most of the time when someone says something silly or out of turn on the intertubes it is done for teh lulz. Responding in anger is never a good move when that is the case (actually being angry about anything on the internet is usually a bad move, one which usually precipitates a series of bad judgement calls and massive drama). Mike Prevost, the DBsign Product Manager for Gradkell Systems, not only knows this well, he did something unusual and good: he explained his frustration with what I wrote in a reasonable way and then went through my article line-by-convoluted-line and offered explanations and corrections. He even went further than that and gave me, an obscure internet personality, his contact information so I can give him a call to clear up my misconceptions and offer recommendations. Wow.

That is the smartest thing I’ve seen a software manager do in response to negative internet publicity — and I have quite a history with negative internet publicity (but in other, admittedly less wholesome places than this). So now I feel compelled not only to offer a public apology for writing technically inaccurate comments, but also to take Mr. Prevost up on his offer, learn a bit more about DBsign (obviously nobody is more equipped to explain it to me than he is), and write about that as well.

The most interesting thing here is not the software, though — it is the wetware. I am thoroughly impressed by the way he’s handling something which obviously upsets him, and I want to ask him what motivated his method of response. When I say “obviously upsets” I don’t mean that his email let on that he’s upset directly — he was quite professional throughout. Rather, I know how it feels to have been deeply involved in a knowledge-based product and have someone talk negatively about it out of turn (actually, it can be frustrating to have someone speak positively out of turn in the same way). I’ve developed everything from intelligence operations plans to strategic analysis products to software myself, and I know that one of the most important aspects of any knowledge worker’s world is his pride and personal involvement with his work. This is a very personal subject. Just look at the way flamewars get out of hand so fast on development mailing lists. I still have epic flamewar logs kept since the very early days of Linux kernel development, Postfix dev mayhem and even flamewars surrounding the Renegade BBS project. While the decision to use a comma (or a colon, or whatever) as a delimiter in an obscure configuration file may seem like a small point to an outsider, to the person who spent days ploughing over the pros and cons of such a decision, or the people who will be enabled or constrained in future development efforts by it, it is very personal indeed.

Unfortunately this week has me travelling around the globe — twice. Because of that I just don’t have time to call Mr. Prevost up yet, or make major edits to anything I’ve got posted, but I’m going on record right now and saying three things:

  1. I should have personally checked what the DTMO help desk (clearly a dubious source of technical information) told me about how DBsign works and what the hangups in interoperation with current open source platforms are. I’m sorry about that and I likely cast DBsign in the wrong light because of this.
  2. Gradkell Systems are not a bunch of assholes — quite the opposite, it seems. Their openness is as appreciated as it is fascinating/encouraging.
  3. DBsign might not suck after all. Hopefully I’ll learn things that will completely reverse my position on that — if not, Mr. Prevost seems open to recommendations.

So yes, I’ve been turned into a mudkip.

The part in point 3 above about Mr. Prevost being open to recommendations, when fully contemplated, means something special (and I’ve had a 16-hour flight and two days in airports to think about this): great managers of shitty software projects will eventually be managers of great software projects, whether because they move on to other projects that are great, or because they take enough pride in their work to evolve a once-bad project into a great one.

Cloning: Not a viable business model

There has been a bit of talk over the last few decades about cloning technologies and the idea that we are technically capable of human cloning at the present time. One way of generating public interest in the mass media when there isn’t much to talk about is to resort to the scary-technology-future schtick. While the achievement of human cloning is noteworthy from a technical standpoint, visions of a utopian/nightmare scenario in which vast numbers of human clones are produced to further a societal, military or economic end are simply not based in reality.

Humans have evolved the unique ability to adapt our environments to ourselves. This is the opposite of what other organisms have evolved to be capable of. That capability is built on the back of a number of significant human traits. To name a few: opposable thumbs, high intelligence, conscious imagination, multiple-layered memory, the ability to codify and classify our imaginings, complex inter-organism communications, high-order emotional responses and memory, and the critical ability to self-organize into super-organisms. It reads a bit like a product feature list, or more interestingly, a list of Unix-style package dependencies. There is no single trait which can grant the ability to do what humans spend most of their time doing, and there is no magic formula which can accurately model and explain human behavior.

The evolutionary pressures necessary to produce humanity in its present form are varied, complex and largely unknowable at the present time. That humans have ultimately come out of the process is nothing short of miraculous — at least by our present understanding. (On the other hand, strict observation of the anthropic principle forces us to abandon the notion that what has happened on Earth could not have happened elsewhere — and carrying this to a logical conclusion, if the universe is in fact infinite (or, stated another way, if the multiverse is infinitely multifaceted), then it must have occurred somewhere else any number of times. Whether the universe/multiverse/innerverse/whatever-verse is infinite is, of course, a subject of debate.)

Cloning, in essence, locks in whatever changes have occurred in the target organism indefinitely. This sets the cloned product outside the world of evolutionary pressure and places it directly into the world of pure economic product — which is subject to the forces of supply and demand. At the present time people enjoy reading emotionally charged imaginings about mass-clone scenarios, and the same people enjoy reading emotionally charged imaginings about the supposed overpopulation of the Earth — in both cases produced and marketed by the same media organizations (whose business is marketing their product, not understanding applied technology).

If the world is overpopulated then we have no need for clones, because the expense of cloning will not provide a benefit any greater than that of recruiting existing humans who were produced at no burden to whoever the employer is in the scenario. Leaving the burden of (re)production, rearing, education, etc. to a family model (be it nuclear, polygamist, polyamorous, broken home, hooker bastard spawn, whatever) provides available humans at an enormous discount compared to any commercial cloning operation and is therefore the correct market option. This leaves the only commercially viable cloning options niche in nature at best. Rich men who really want to buy exactly 5 copies of their favorite shower girl may provide a tiny market of this nature, but there is no guarantee that all five clones will agree to whatever the job at hand winds up being, that the purchaser will be alive and remain interested in the project long enough to see it come to fruition (over a decade), or that the nature of the market will not change enormously before completion. (The ready availability of multiple-birth natural clones (twins, triplets, etc.) has not produced a similar market in any case outside of a very small niche in adult services, and that market already appears to be saturated. It turns out that variety tends to be the greatest male aphrodisiac anyway.)

So this leaves what? Very little market for one of the few proposed uses of clones.

The military has no use for clones beyond what it already gains from mass screenings of naturally evolved humans, who do not come with the large overhead of a human cloning program attached. The idea that the military wants identical soldiers is flawed to begin with, however. The U.S. Army has a deep recognition of the benefits of a hugely diverse fighting force and would not want to sacrifice those advantages in exchange for another budgetary drain: the institutional burden of becoming Dad to a large number of clones — who may decide that they have better things to do than serve Washington once they have all the big guns anyway. War is a highly emotional experience, and the support soldiers provide one another, along with the culture that has evolved within the military because of it, is almost as complex to understand as the phenomenon of humanity itself. Trying to replicate or replace such a complex system that already exists, works well and is free with one which does not yet exist and might fail at enormous cost would be a very difficult thing to pitch to taxpayers.

Once again, this leaves very little potential market where the imagination has a fun time seeing one.

The only viable cloning market for the foreseeable future would be in organ production and harvesting. Even in this market, however, there are a few reasons human clones will never be viable products. Once again, the expense and time required to clone a human already equals that of the human who needs the bio replacement in the first place, the primary difference being that the existing human would already be rich and well enfranchised enough to order a clone from which to harvest his needed spare parts (and the clone, obviously, would not). This conjures up images of a really fun movie from a few years ago, “The Island”, which told the story of two clones produced for organ replacement suddenly realizing what they are and deciding that such a short future wasn’t really for them. But that is the movies. Back in the world of reality we already have the technology to clone human organs, and these organ clones do not require fully human hosts. It is possible to grow a human ear on the back of a lab rat, a human heart inside of a pig, and likely other parts on other hosts which are faster and far cheaper to maintain and harvest than human clones would be.

Once again, no market here, either.

Medical testing is another area where I’ve heard talk of mass human cloning. Perfect test subjects, so some claim. But they are only perfect test subjects on the surface. Identical people are not perfect test subjects in the slightest when it comes to medical testing. The most important aspect of drug, allergy, ergonomic, athletic-tolerance and other medical testing is the statistical significance of the test group. The word “group” here is everything. Testing clones would merely provide the same person a number of times, which amounts to testing the same person ad nauseam at enormous expense for no gain. Humanity is a varied and evolving thing, and medical advancements must take that into account or else those advancements themselves become useless and thereby unmarketable.
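
The statistical point is easy to demonstrate. Below is a minimal simulation (all the numbers are toy assumptions, invented for illustration) of why testing 100 clones tells you no more about the population than testing one person repeatedly: the uncertainty that matters comes from between-person variation, which cloning removes from the sample but not from humanity.

```python
import random

random.seed(42)

def trial(n_subjects, clones=False):
    # One genome copied n times, or n distinct genomes drawn from the population.
    if clones:
        base = [random.gauss(0.5, 0.2)] * n_subjects
    else:
        base = [random.gauss(0.5, 0.2) for _ in range(n_subjects)]
    # Each measurement adds independent noise on top of the subject's true response.
    return [b + random.gauss(0, 0.05) for b in base]

def mean(xs):
    return sum(xs) / len(xs)

# Repeat each trial design 1,000 times and see how much the estimated
# population response swings from trial to trial.
for clones in (False, True):
    estimates = [mean(trial(100, clones=clones)) for _ in range(1000)]
    mu = mean(estimates)
    sd = mean([(e - mu) ** 2 for e in estimates]) ** 0.5
    label = "100 clones" if clones else "100 people"
    print(f"{label}: estimate swings by ~{sd:.3f} between trials")
```

The varied group pins the population response down about ten times more tightly; the clone group's estimate is hostage to whichever single genome was copied, no matter how many copies you test.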

Sorry sci-fi fans, no market here, either.

For the same reasons that medical testing on clones is useless, so is an entire society created from clones. A clone society is instantly susceptible to lethal mass epidemics from every vector. It is very likely that a flu that kills one person would kill them all, whereas natural humanity, taken as a whole, tends to be largely resistant to every pathogen in nature (and even engineered ones). Though humans may suffer to varying degrees independently of one another due to individual variations, those variations, combined and spread across the masses of humanity, provide an overwhelmingly powerful insurance against the mass extinction of humanity. A cloned society removes this ultimate protection at its root and leaves the population totally naked as a whole. Contemplating these realities means contemplating one’s own mortality and relative insignificance, and I imagine that is a likely reason people don’t think about such things when they see scary stories on TV or the internet about future dystopian scenarios of a planned Earth-wide all-clone society (a la some Illuminati conspiracy variants).
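
A toy model makes the monoculture problem concrete. Here every person carries a handful of resistance genes out of a larger pool (the gene counts are invented purely for illustration), and a pathogen kills everyone lacking the matching gene:

```python
import random

random.seed(1)

GENE_POOL = 20        # kinds of resistance genes in circulation (assumed)
GENES_PER_PERSON = 5  # how many each person carries (assumed)

def population(n, cloned=False):
    if cloned:
        profile = frozenset(random.sample(range(GENE_POOL), GENES_PER_PERSON))
        return [profile] * n  # every citizen has the same immune profile
    return [frozenset(random.sample(range(GENE_POOL), GENES_PER_PERSON))
            for _ in range(n)]

def survival_rate(pop, pathogen):
    # Only people carrying the matching resistance gene survive this pathogen.
    return sum(1 for person in pop if pathogen in person) / len(pop)

for cloned in (False, True):
    pop = population(10_000, cloned)
    rates = [survival_rate(pop, g) for g in range(GENE_POOL)]
    print("cloned" if cloned else "varied",
          f"- mean survival {sum(rates)/len(rates):.0%},",
          f"worst case {min(rates):.0%}")
```

The averages come out the same, but the worst case is the whole story: the varied population never drops below roughly a quarter surviving any given pathogen, while the cloned one is annihilated outright by any pathogen its single genome cannot resist.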

So an all-clone society? Just not workable. It is not merely economically unviable; it is a downright unsafe way to try to manage humanity.

So why all the fuss? Mainly because there is a big market in generating public drama around new technologies which the general public does not yet fully understand or have any practical contact with. The technologies required to achieve a human clone are significant, but they will manifest themselves in the market (and already do) in very different ways than current popular media proposes.

Sometimes stereotypes turn up in strange places

The other day an unusually perfect message title drifted through the “Fedora Women” mailing list. It was good enough that I felt compelled to screenshot it for posterity — and today I remembered to share it with the world:

[screenshot: womenapi.png — the subject line in question]

Whatever the odds of the elements of this subject line coming together in that particular combination, it was sweet poetic justice. (I mean, we don’t have a “Fedora Men” mailing list… or maybe that’s what all the other lists are? Sort of like not having a “white history month” in school.)

Sticking your CAC in the Pooter for Uncle Sugar

So I finally broke down and started writing tutorials about how to use your DoD CAC in conjunction with Linux and Mac OS X (and other Unixes as I get more test systems assembled…). Since Fedora 13 pretty much took the cake for this year’s kickass Linux distro I wrote instructions for 32-bit Fedora 13 first. Next up will be 32-bit Ubuntu 10.04 LTS, then 64-bit Fedora 13, 32-bit Ubuntu 10.10, 64-bit Ubuntu (probably 10.04 LTS first), Fedora 14, and Mac OS X somewhere in there as soon as I get my hands on a test system.

The main guide portal page can be found here: http://zxq9.com/dodcac/

It turns out that a huge number of people in the military have been waiting to get above the Windows scramble and move on to Linux or Mac OS X. The awareness of Unix-type systems in this generation is pretty amazing considering recent history (it is equally amazing that almost nobody knows what BSD is anymore). The one thing holding them back is an unfounded fear of not being able to access DoD web apps such as DTS, AKO/DKO and RMT. Another is the fear of losing the ability to play DVDs on their computers, because they have heard the evil (and tragi-comic) rumors that playing DVDs on Linux is hard to do and makes your palms hairy. (Of course, they could always dual-install… and doing it with a new hard drive is so easy my tech-uninterested wife can do it.)

I cover all of that in the tutorials and it’s pretty easy. If I got paid by DoD to maintain this stuff I would go as far as writing GUI Python scripts to make the installations cake for everyone the way Anonymouse used to. But alas, I spend an inordinate amount of time doing this all for free — and the solutions are half-way to the level of user-friendliness they could be. Actually, that I don’t get paid for this while it is a concrete service used by many servicemembers sort of pisses me off, when we have literally millions a year getting pissed away on bad projects all over the place. If DoD would consider the utility of standing up a development house of, say, 10 top-level open source developers (the sort who can demand low-six-figure salaries) plus a person who can bridge the gap between combat operations and military experience and the open source world (hint: this would be someone just like me…), they could safely switch most of their infrastructure and save roughly $15,000 per seat in recurring site licenses, security and maintenance across the force (that figure comes from my signal officer’s quote for how much it costs us to put a single computer on the network).

(Where I work right now there are about 300 computers deployed on the NIPR. Just switching that single building over would pay for three times the development group I am discussing, so six figures for no-shit developers is actually extremely cheap, and you could get the right people, not the inept folks who bumbled through development of crap like DTS and said they had a product worth releasing…)
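
For the skeptical, here is the back-of-envelope math. The $15,000-per-seat figure and the 300-seat count are from above; the salary numbers are my own assumptions for illustration.

```python
# Back-of-envelope: one building's recurring seat costs vs. a small
# in-house open source development group. Salary figures are assumed.
seats = 300             # computers on the NIPR in this one building
cost_per_seat = 15_000  # recurring cost per networked seat (signal officer's quote)

developers = 10
dev_salary = 130_000     # "low six figures" -- assumption
liaison_salary = 120_000 # military/open-source bridge role -- assumption

seat_total = seats * cost_per_seat
group_total = developers * dev_salary + liaison_salary

print(f"seat costs, one building: ${seat_total:,}")   # $4,500,000
print(f"development group:        ${group_total:,}")  # $1,420,000
print(f"ratio: {seat_total / group_total:.1f}x")      # ~3.2x
```

Even with generous salaries, a single building's recurring seat costs cover the whole group roughly three times over.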

The fact that the MPAA and RIAA have so much political clout is something I would ordinarily have blogged about by now. I have not… yet. Instead of writing yet another rant-on-the-web-about-the-media-industry, thereby merely regurgitating all of the great points, both personal and legal, that have been better stated elsewhere, I think it would be more interesting and productive to abstain (though ranting is tempting) and instead examine the fundamental trends which will eventually render all such efforts at controlling individual and independent mathematical achievements impossible and unenforceable.

There are some great points to be made and some incredible business opportunities emerging as the nature of the world changes and art, math, social interaction, thought and even evolution (in some senses) become digitized, mathematical processes. Give some thought to this. Depending on where your social and/or religious emotional investments lie, this is very exciting, frightening, unstoppable or something which must be fought. Whoever thought math machines could be so controversial?

Internet Censorship and Social Circumvention

A long time ago John Gilmore (same link at Ask.com, in case Wikipedia gets defaced again) observed that the internet tends to perceive censorship as network damage and routes around it. An incident in China this week has proven not only that this is the case, but that under certain social conditions official censorship can easily cause the censored data to proliferate at a far greater rate than if it had been left alone.

The event which brought this to my attention was the internet rumor in China that Zhou Xiaochuan (the Governor of the Central Bank of China — a fairly high-ranking party position) had charges brought against him for losing $430 billion in Chinese government money on U.S. Treasury Bill investments, had disappeared and was making an effort to defect to the United States. There are, of course, many levels at which the details of this story do not make sense, beginning with the idea that a $430 billion loss on U.S. T-bills was even possible, but the idea that a leading party figure may have gotten on the wrong side of the controlling party in China and was attempting to flee to the West is not without precedent and not outside the realm of possibility.

Where the rumor started is a little difficult to track down, but that doesn’t really matter. It was discussed all over social networking and alternative media (Chinese blogs, etc.) as well as brought up in web chat rooms and in-game chat all over the country. The government tried to shut the discussion down, and the attempt failed miserably despite the state-directed deletion of web pages, official censorship instructions to media outlets, and search filters which blocked or blanked web searches related to Zhou’s name, the Central Bank of China, etc. All of this was futile, of course: censoring team chat inside an application such as World of Warcraft would be extremely difficult for anyone to pull off effectively, and censoring phone and text conversations on such a limited topic is impossible without a blanket disruption of all data service.

So… the rumor grew. And it grew on the back of the fact of censorship itself. The censorship was giving fuel to the fire. A common conversation between conspiracy-minded internet users goes something like:

“Hey, I just heard Zhou Xiaochuan got in trouble for losing a bunch of state money and defected to the U.S.! He’s disappeared and now the government is trying hard to shut the story down. Tell your friends!”
“I don’t believe that, that’s gotta be bullshit. He’s a bank governor, why would he defect?”
“You think it’s bullshit? Try searching for his name anywhere on the net. It’s all blanked or blocked. You can’t even get a return for a search on him from financial websites.”
…does a search… “Whoa! Something must be going down, because you’re right, I can’t search for him anywhere!”

And thus the Internet Hate Machine drives on and smacks the face of the state censors who are inadvertently acting to proliferate information on the social layer by trying to censor it at the technical layer.

This sort of phenomenon has had, and continues to have, a huge effect on the way information relay and proliferation play out across the world. Free societies have largely already explored the new dynamics and have adapted — which was an easy thing for, say, the U.S. to do, as the government there doesn’t make much of an effort to censor anything, ever. But in societies such as China and Russia, where information censorship is a key element of social control, and social control is something their existing political power structure absolutely must maintain to effectively run the state, the effects are going to become increasingly interesting and unpredictable as economic, social and military stresses increase over the next few years.

The issue I wanted to discuss is the idea that censorship can backfire, particularly in an environment which has tuned itself to expect a high level of governmental and institutional interference with the free exchange of information and ideas. As a side note, it is interesting to learn that the U.S. government denied the rumor itself today, though an official refutation by the Chinese government has not yet been made. Could he have defected? Maybe. But a single person defecting, even a significant player in the Chinese government, is less important than the overall dynamic: internet rumors can undermine any information control scheme, and any effort at control has a high chance of backfiring and expanding the influence and distribution of the rumor.

AIDS Research Declining: Perspective of a Former AIDSVAX Investor

An almost comical article was released by the AFP today trumpeting the first human clinical trials of an African-developed AIDS vaccine. While on the surface this certainly sounds great and hopeful, the fact is that not only is the trial likely of very little statistical significance (the pool of patients is only 48 people), but the real research money — and therefore the real brains — for AIDS prevention research is in other areas.

But why would AIDS research money be in any area other than vaccines these days? Just a few years ago several now-forgotten subsidiaries of the most respected pharmaceutical companies were hard at work trying to develop a vaccine for AIDS. This was natural, as every human in the world was a potential (and almost certain) customer, so even a very cheap vaccine would see at least 6 billion units sold as quickly as they could be produced. This is not even counting the market position one would hold for the duration of the patent’s life, as the vaccine would almost certainly become a worldwide child vaccination requirement.

The fact that the vaccines never made it to market (most never even made it to trials), and the striking reality that nearly all of these companies or subsidiaries no longer sponsor AIDS vaccine research (or in some cases no longer exist), is a testament both to the difficulty of this sort of research and to the negative effects of intellectual property threats from a huge number of sources.

The basic problem with medical research is that it simply is not free. A common misconception among the pharmaceutically un-invested public is that pharmaceuticals are produced by companies which are dark and evil and seek to control life, death and the money involved with those two. People further assume that the sort of extremely difficult and exhaustive research required to develop truly innovative and life-saving drugs and techniques is somehow not worth the enormous effort (represented by money) it requires, and that companies have no right to recoup the billions they spend annually on such research by charging market prices for drugs.

The drug research industry has seen a huge contraction in recent years, particularly in areas such as AIDS prevention research and drugs, simply because companies are afraid of investing the time and money required to produce a stable product only to have their intellectual rights trampled and their product stolen.

“But that’s ridiculous!” was the first response I got to this. It is not. Consider that every populist government on the planet and nearly every left-leaning political party or private organization has plainly stated that any technical knowledge with the potential to reduce or eradicate AIDS will and must be appropriated in the public interest. No compensation is mentioned here and none is intended. The image of drug companies as being only after money (as if that were somehow a crime and against the public interest) and therefore evil greatly assists this assertion and has, indeed, protected such policies and the men who promote them from any backlash. They have, in fact, usurped the moral high ground and made their intended theft appear moral — and amazingly made working hard and spending money to eradicate AIDS with the expectation of being compensated for the effort appear evil. Amazing, isn’t it?

French and Canadian health consortia have both stated that they will strike the intellectual property rights of whatever company first successfully develops an AIDS vaccine within their jurisdictions. Under their proposed programs government-subsidized generic drug makers are the ones who will provide the “public service” of producing at-cost generic AIDS vaccinations for everyone. This sentiment sounds great to anyone not actually involved in trying to find an AIDS vaccine… or to anyone who lacks an understanding of how all the medical miracles we take for granted today have come into being (not to mention the mountain of other miracle gadgets that make modern life what it is… from elevators to airplanes).

I personally was heavily invested in more than one company trying to develop an AIDS vaccine, back in the days when that was a popular and forward-thinking thing to do. I invested money not simply because I want to see AIDS done away with (I enjoy philandering enough to have a personal interest in seeing this disease wiped out, after all) but more importantly because I want to see a decent return on my investment capital.

In the end, I have the intellectual and operational capacity as an individual to avoid contracting AIDS under nearly all circumstances, so I am much less worried about contracting AIDS personally than I am about getting a decent return on my money. I am not unique in this regard. Saving the world simply doesn’t make you any money. I tried it for years, risking my own life in the process, and you just walk away with divorces under your belt, kids who don’t know you and a home country that “respects” you from afar but doesn’t understand or care to know you as a person anymore. However, investing money in things that are inherently useful (and therefore worth money) is something that is easy to believe in, no matter how cynical the world has made you, and the pinnacle of usefulness for humans is something that has the potential to save their very lives from something like AIDS.

But the problem with such a thing is that everybody who doesn’t have anything to do with the effort wants it, and not just wants it badly enough to pay for it (which is your whole angle as an investor) but wants it badly enough to steal it. Enough of them want to steal it that they will vote together to make the process of stealing it legal. So in the end you can invest billions in an AIDS vaccine, and the only thing that will ever come of it is for people not to thank and repay you, but to steal the product of your long labor in a flurry of moral self-certainty and self-righteously call you an “evil pharmaceutical profiteer”. Some way to thank the group who worked so long and hard to save the world from AIDS.

Where is the fun in that? I lose my investment while the bleeding hearts pat themselves on the back for what amounts to intellectual property theft, leaving all those who worked hard on the project to wonder what happened and why they are suddenly looking for jobs or investment opportunities in another sector instead of sitting back and enjoying the fruits of their labor. After all, once research is proven to be unprofitable, does anyone imagine smart money would continue to fund smart researchers only to repeat the painful experience of being legally robbed? Research is a business, and it takes huge sums of money to pay huge teams of talented researchers who can demand appropriately huge salaries for committing years of their lives to extremely difficult and deep research. Researchers are easy to come by, but motivated, insightful, good researchers of the caliber a private concern is willing to pay top money for are frighteningly rare.

So… to bring a rambling article to its focus: what happened to all those very promising trial vaccines and the companies that were producing them? They all shut down. Funding was withdrawn, people were let go, and the information collected across thousands of man-years of research was recorded, sealed and secured, probably forever, never to see the light of day. The research is simply too controversial. It appears that nobody is ever going to let an investor or company make a dime off of an AIDS vaccine, at least not while, in the minds of billions of people around the world, it remains a political topic rather than an actual disease that infects Real People(TM). That means the money will move into other, less controversial areas of research or different sectors of industry entirely, and AIDS vaccines will continue to be a largely neglected area of research.

But what about government grants? Those exist, sure, but they provide a mere fraction of what is necessary for research at this level with any speed. There is no war against AIDS, and AIDS is not threatening the national security of a country such as the US which actually has the means to do something about it. So it will fall to the side in favor of more pressing issues, such as people who kill citizens by the thousands with airplanes, or political hot-button topics such as making sure that Planet English only produces literature using feminine or gender-neutral pronouns (even when it doesn’t make sense) or global warming (topics far less controversial on the surface, despite resting on far shakier science than AIDS research).

As discussed above, researchers are easy to come by — the dirtbag, non-productive type, I mean. The sort of researcher who is content to subsist on government grants which require no real way of quantifying, qualifying or substantiating their research for funding justification (which is what the government grant game is all about) is not the same sort of top-notch engineer and researcher hired by companies with the private investment capital to pay bigger salaries for bigger brains. Research, just like making drugs, is a business after all, and nobody goes to MIT or interns at the Mayo Clinic to end their life poor, merely happy with the “difference” they made on a crap government salary.

The South African trial most likely will fail, but the failure, being based on an extremely limited group (most prevention trials, unlike treatment trials, use pools of thousands, not tens, for very good reasons), will be easy enough to publicly misinterpret long enough to attract unwise investors into impulsively tossing their money at this company in time for it to close its doors and stop operating at a realized profit — and yes, halting operations after absorbing free bags of stupid money (as opposed to the smart money mentioned above) is a business model, though it’s a swindle, not a productive enterprise.
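
To put a number on why tens of subjects cannot demonstrate prevention, here is a minimal sketch. The incidence and efficacy figures are assumptions invented for illustration, not data from the actual trial.

```python
import math

# Toy power check: can a trial detect a vaccine that halves HIV incidence?
# Assumed numbers: 2% annual incidence in the control arm, 1% with vaccine.
p_control, p_vaccine = 0.02, 0.01
effect = p_control - p_vaccine  # the difference we hope to observe: 0.01

def std_error(p1, p2, n_per_arm):
    # Standard error of the difference between two observed proportions.
    return math.sqrt(p1 * (1 - p1) / n_per_arm + p2 * (1 - p2) / n_per_arm)

# 48 people split into two arms, vs. a conventional trial of thousands.
for n in (24, 1500):
    se = std_error(p_control, p_vaccine, n)
    detectable = effect > 1.96 * se  # rough 95% significance threshold
    print(f"n per arm = {n:4}: effect {effect:.3f} vs noise {1.96 * se:.3f}"
          f" -> {'detectable' if detectable else 'pure noise'}")
```

With 24 per arm the expected effect is buried several times over in sampling noise; only at thousands of subjects does it begin to clear the significance bar, which is exactly why serious prevention trials are sized that way.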

This certainly appears to be a stunt that the media is trumpeting out of sheer hope, not out of concrete and promising data. I hope that AIDS gets eradicated; further, I hope that I can profit from that eradication. I am happy either way, but one thing I am not going to stand by and watch is AIDS being eradicated while the people responsible for the work behind it get nothing in return but a smirk, a smile, or outright robbery.