Monthly Archives: April 2011

Gradkell Systems: Not assholes after all

I was contacted yesterday (or was it two days ago? I’ve since flown across the international date line, so I’m a bit confused about time at the moment) by the product manager for DBsign, the program that is used for user authentication and signatures on DTS (and other applications? unsure about this at the moment). He was concerned about two things: the inaccurate way in which I described the software and its uses, and the negative tone in which I wrote about his product. It was difficult to discern whether he was more upset about my making technically inaccurate statements or about my use of the phrase “DBSign sucks”.

Most of the time when someone says something silly or out of turn on the intertubes it is done for teh lulz. Responding in anger is never a good move when that is the case (actually being angry about anything on the internet is usually a bad move, one which usually precipitates a series of bad judgement calls and massive drama). Mike Prevost, the DBsign Product Manager for Gradkell Systems, not only knows this well, he did something unusual and good: he explained his frustration with what I wrote in a reasonable way and then went through my article line-by-convoluted-line and offered explanations and corrections. He even went further than that and gave me, an obscure internet personality, his contact information so I can give him a call to clear up my misconceptions and offer recommendations. Wow.

That is the smartest thing I’ve seen a software manager do in response to negative internet publicity — and I have quite a history with negative internet publicity (but in other, admittedly less wholesome places than this). So now I feel compelled not only to offer a public apology for writing technically inaccurate comments, but also to take Mr. Prevost up on his offer, learn a bit more about DBsign (obviously nobody is more equipped to explain it to me than he is), and write about that as well.

The most interesting thing here is not the software, though — it is the wetware. I am thoroughly impressed by the way he’s handling something which obviously upsets him and want to ask him about what motivated his method of response. When I say “obviously upsets” I don’t mean that his email let on that he’s upset directly — he was quite professional throughout. Rather, I know how it feels to have been deeply involved in a knowledge-based product and have someone talk negatively about it out of turn (actually, it can be frustrating to have someone speak positively out of turn in the same way). I’ve developed everything from intelligence operations plans to strategic analysis products to software myself, and I know that one of the most important aspects of any knowledge worker’s world is his pride and personal involvement with his work. This is a very personal subject. Just look at the way flamewars get out of hand so fast on development mailing lists. I still have epic flamewar logs kept since the very early days of Linux kernel development, Postfix dev mayhem and even flamewars surrounding the Renegade BBS project. While the decision to use a comma (or a colon, or whatever) as a delimiter in an obscure configuration file may seem like a small point to an outsider, to the person who spent days ploughing through the pros and cons of such a decision or the people who will be enabled or constrained in future development efforts by such a decision it is very personal indeed.

Unfortunately this week has me travelling around the globe — twice. Because of that I just don’t have time to call Mr. Prevost up yet, or make major edits to anything I’ve got posted, but I’m going on record right now and saying three things:

  1. I should have personally checked what the DTMO help desk (clearly a dubious source of technical information) told me about how DBsign works and what the hangups in interoperation with current open source platforms are. I’m sorry about that and I likely cast DBsign in the wrong light because of this.
  2. Gradkell Systems are not a bunch of assholes — quite the opposite, it seems. Their openness is as appreciated as it is fascinating/encouraging.
  3. DBsign might not suck after all. Hopefully I’ll learn things that will completely reverse my position on that — if not, Mr. Prevost seems open to recommendations.

So yes, I’ve been turned into a mudkip.

The part in point 3 above about Mr. Prevost being open to recommendations, when fully contemplated, means something special (and I’ve had a 16-hour flight and two days in airports to think about this): great managers of shitty software projects will eventually be managers of great software projects, whether because they move on to other projects that are great, or because they take enough pride in their work to evolve a once bad project into a great one.

Immaculate Intervention: The Wars of Humanitarianism

Another excellent examination of practical reality by George Friedman. The essay below places the political realities of humanitarian intervention in direct contrast with the facts of military action. Such examinations are an exercise from which political activists and the general public (and, incidentally, Freshman Everything Barack Obama) would benefit greatly, since they live in a world filled with great imaginings about How The Greater Good Should Get Done yet are unfamiliar with geopolitics and lack practical experience with how war is actually waged.

There are wars in pursuit of interest. In these wars, nations pursue economic or strategic ends to protect the nation or expand its power. There are also wars of ideology, designed to spread some idea of “the good,” whether this good is religious or secular. The two obviously can be intertwined, such that a war designed to spread an ideology also strengthens the interests of the nation spreading the ideology.

Since World War II, a new class of war has emerged that we might call humanitarian wars — wars in which the combatants claim to be fighting neither for their national interest nor to impose any ideology, but rather to prevent inordinate human suffering. In Kosovo and now in Libya, this has been defined as stopping a government from committing mass murder. But it is not confined to that. In the 1990s, the U.S. intervention in Somalia was intended to alleviate a famine while the invasion of Haiti was designed to remove a corrupt and oppressive regime causing grievous suffering.

It is important to distinguish these interventions from peacekeeping missions. In a peacekeeping mission, third-party forces are sent to oversee some agreement reached by combatants. Peacekeeping operations are not conducted to impose a settlement by force of arms; rather, they are conducted to oversee a settlement by a neutral force. In the event the agreement collapses and war resumes, the peacekeepers either withdraw or take cover. They are soldiers, but they are not there to fight beyond protecting themselves.

Concept vs. Practice

In humanitarian wars, the intervention is designed both to be neutral and to protect potential victims on one side. It is at this point that the concept and practice of a humanitarian war become more complex. There is an ideology undergirding humanitarian wars, one derived from both the U.N. Charter and from the lessons drawn from the Holocaust, genocide in Rwanda, Bosnia and a range of other circumstances where large-scale slaughter — crimes against humanity — took place. That no one intervened to prevent or stop these atrocities was seen as a moral failure. According to this ideology, the international community has an obligation to prevent such slaughter.

This ideology must, of course, confront other principles of the U.N. Charter, such as the right of nations to self-determination. In international wars, where the aggressor is trying to both kill large numbers of civilians and destroy the enemy’s right to national self-determination, this does not pose a significant intellectual problem. In internal unrest and civil war, however, the challenge of the intervention is to protect human rights without undermining national sovereignty or the right of national self-determination.

The doctrine becomes less coherent in a civil war in which one side is winning and promising to slaughter its enemies, Libya being the obvious example. Those intervening can claim to be carrying out a neutral humanitarian action, but in reality, they are intervening on one side’s behalf. If the intervention is successful — as it likely will be given that interventions are invariably by powerful countries against weaker ones — the practical result is to turn the victims into victors. By doing that, the humanitarian warriors are doing more than simply protecting the weak. They are also defining a nation’s history.

There is thus a deep tension between the principle of national self-determination and the obligation to intervene to prevent slaughter. Consider a case such as Sudan, where it can be argued that the regime is guilty of crimes against humanity but also represents the will of the majority of the people in terms of its religious and political program. It can be argued reasonably that a people who would support such a regime have lost the right to national self-determination, and that it is proper that a regime be imposed on it from the outside. But that is rarely the argument made in favor of humanitarian intervention. I call humanitarian wars immaculate intervention, because most advocates want to see the outcome limited to preventing war crimes, not extended to include regime change or the imposition of alien values. They want a war of immaculate intentions surgically limited to a singular end without other consequences. And this is where the doctrine of humanitarian war unravels.

Regardless of intention, any intervention favors the weaker side. If the side were not weak, it would not be facing mass murder; it could protect itself. Given that the intervention must be military, there must be an enemy. Wars by military forces are fought against enemies, not for abstract concepts. The enemy will always be the stronger side. The question is why that side is stronger. Frequently, this is because a great many people in the country, most likely a majority, support that side. Therefore, a humanitarian war designed to prevent the slaughter of the minority must many times undermine the will of the majority. Thus, the intervention may begin with limited goals but almost immediately becomes an attack on what was, up to that point, the legitimate government of a country.

A Slow Escalation

The solution is to intervene gently. In the case of Libya, this began with a no-fly zone that no reasonable person expected to have any significant impact. It proceeded to airstrikes against Gadhafi’s forces, which continued to hold their own against these strikes. It now has been followed by the dispatching of Royal Marines, whose mission is unclear, but whose normal duties are fighting wars. What we are seeing in Libya is a classic slow escalation motivated by two factors. The first is the hope that the leader of the country responsible for the bloodshed will capitulate. The second is a genuine reluctance of intervening nations to spend excessive wealth or blood on a project they view in effect as charitable. Both of these need to be examined.

The expectation of capitulation in the case of Libya is made unlikely by another aspect of humanitarian war fighting, namely the International Criminal Court (ICC). Modeled in principle on the Nuremberg trials and the International Criminal Tribunal for the former Yugoslavia, the ICC is intended to try war criminals. Trying to induce Moammar Gadhafi to leave Libya knowing that what awaits him is trial and the certain equivalent of a life sentence will not work. Others in his regime would not resign for the same reason. When his foreign minister appeared to defect to London, the demand for his trial over Lockerbie and other affairs was immediate. Nothing could have strengthened Gadhafi’s position more. His regime is filled with people guilty of the most heinous crimes. There is no clear mechanism for a plea bargain guaranteeing their immunity. While a logical extension of humanitarian warfare — having intervened against atrocities, the perpetrators ought to be brought to justice — the effect is a prolongation of the war. The example of Slobodan Milosevic of Yugoslavia, who ended the Kosovo War with what he thought was a promise that he would not be prosecuted, undoubtedly is on Gadhafi’s mind.

But the war is also prolonged by the unwillingness of the intervening forces to inflict civilian casualties. This is reasonable, given that their motivation is to prevent civilian casualties. But the result is that instead of a swift and direct invasion designed to crush the regime in the shortest amount of time, the regime remains intact and civilians and others continue to die. This is not simply a matter of moral squeamishness. It also reflects the fact that the nations involved are unwilling — and frequently blocked by political opposition at home — from the commitment of massive and overwhelming force. The application of minimal and insufficient force, combined with the unwillingness of people like Gadhafi and his equally guilty supporters to face The Hague, creates the framework for a long and inconclusive war in which the intervention in favor of humanitarian considerations turns into an intervention in a civil war on the side that opposes the regime.

This, then, turns into the problem that the virtue of the weaker side may consist only of its weakness. In other words, strengthened by foreign intervention that clears their way to power, they might well turn out just as brutal as the regime they were fighting. It should be remembered that many of Libya’s opposition leaders are former senior officials of the Gadhafi government. They did not survive as long as they did in that regime without having themselves committed crimes, and without being prepared to commit more.

In that case, the intervention — less and less immaculate — becomes an exercise in nation-building. Having destroyed the Gadhafi government and created a vacuum in Libya and being unwilling to hand power to Gadhafi’s former aides and now enemies, the intervention — now turning into an occupation — must now invent a new government. An invented government is rarely welcome, as the United States discovered in Iraq. At least some of the people resent being occupied regardless of the occupier’s original intentions, leading to insurgency. At some point, the interveners have the choice of walking away and leaving chaos, as the United States did in Somalia, or staying for a long time and fighting, as they did in Iraq.

Iraq is an interesting example. The United States posed a series of justifications for its invasion of Iraq, including simply that Saddam Hussein was an amoral monster who had killed hundreds of thousands and would kill more. It is difficult to choose between Hussein and Gadhafi. Regardless of the United States’ other motivations in both conflicts, it would seem that those who favor humanitarian intervention would have favored the Iraq war. That they generally opposed the Iraq war from the beginning requires a return to the concept of immaculate intervention.

Hussein was a war criminal and a danger to his people. However, the American justification for intervention was not immaculate. It had multiple reasons, only one of which was humanitarian. Others explicitly had to do with national interest, the claims of nuclear weapons in Iraq and the desire to reshape Iraq. That it also had a humanitarian outcome — the destruction of the Hussein regime — made the American intervention inappropriate in the view of those who favor immaculate interventions for two reasons. First, the humanitarian outcome was intended as part of a broader war. Second, regardless of the fact that humanitarian interventions almost always result in regime change, the explicit intention to usurp Iraq’s national self-determination openly undermined in principle what the humanitarian interveners wanted to undermine only in practice.

Other Considerations

The point here is not simply that humanitarian interventions tend to devolve into occupations of countries, albeit more slowly and with more complex rhetoric. It is also that for the humanitarian warrior, there are other political considerations. In the case of the French, the contrast between their absolute opposition to Iraq and their aggressive desire to intervene in Libya needs to be explained. I suspect it will not be.

There has been much speculation that the intervention in Libya was about oil. All such interventions, such as those in Kosovo and Haiti, are examined for hidden purposes. Perhaps it was about oil in this case, but Gadhafi was happily shipping oil to Europe, so intervening to ensure that it continues makes no sense. Some say France’s Total and Britain’s BP engineered the war to displace Italy’s ENI in running the oil fields. While possible, these oil companies are no more popular at home than oil companies are anywhere in the world. The blowback in France or Britain if this were shown to be the real reason would almost certainly cost French President Nicolas Sarkozy and British Prime Minister David Cameron their jobs, and they are much too fond of those to risk them for oil companies. I am reminded that people kept asserting that the 2003 Iraq invasion was designed to seize Iraq’s oil for Texas oilmen. If so, it is taking a long time to pay off. Sometimes the lack of a persuasive reason for a war generates theories to fill the vacuum. In all humanitarian wars, there is a belief that the war could not be about humanitarian matters.

Therein lies the dilemma of humanitarian wars. They have a tendency to go far beyond the original intent behind them, as the interveners, trapped in the logic of humanitarian war, are drawn further in. Over time, the ideological zeal frays and the lack of national interest saps the intervener’s will. It is interesting that some of the interventions that brought with them the most good were carried out without any concern for the local population and with ruthless self-interest. I think of Rome and Britain. They were in it for themselves. They did some good incidentally.

My unease with humanitarian intervention is not that I don’t think the intent is good and the end moral. It is that the intent frequently gets lost and the moral end is not achieved. Ideology, like passion, fades. But interest has a certain enduring quality. A doctrine of humanitarian warfare that demands an immaculate intervention will fail because the desire to do good is an insufficient basis for war. It does not provide a rigorous military strategy to what is, after all, a war. Neither does it bind a nation’s public to the burdens of the intervention. In the end, the ultimate dishonesties of humanitarian war are the claims that “this won’t hurt much” and “it will be over fast.” In my view, their outcome is usually either a withdrawal without having done much good or a long occupation in which the occupied people are singularly ungrateful.

North Africa is no place for casual war plans and good intentions. It is an old, tough place. If you must go in, go in heavy, go in hard and get out fast. Humanitarian warfare says that you go in light, you go in soft and you stay there long. I have no quarrel with humanitarianism. It is the way the doctrine wages war that concerns me. Getting rid of Gadhafi is something we can all feel good about and which Europe and America can afford. It is the aftermath — the place beyond the immaculate intervention — that concerns me.

This report is republished with permission of STRATFOR.

What Happened to the American Declaration of War?

Another outstanding article from George Friedman which touches on some of the deeper aspects of American governance with regard to the current engagement in Libya.

By George Friedman

In my book “The Next Decade,” I spend a good deal of time considering the relation of the American Empire to the American Republic and the threat the empire poses to the republic. If there is a single point where these matters converge, it is in the constitutional requirement that Congress approve wars through a declaration of war and in the abandonment of this requirement since World War II. This is the point where the burdens and interests of the United States as a global empire collide with the principles and rights of the United States as a republic.

World War II was the last war the United States fought with a formal declaration of war. The wars fought since have had congressional approval, both in the sense that resolutions were passed and that Congress appropriated funds, but the Constitution is explicit in requiring a formal declaration. It does so for two reasons, I think. The first is to prevent the president from taking the country to war without the consent of the governed, as represented by Congress. Second, by providing for a specific path to war, it provides the president power and legitimacy he would not have without that declaration; it both restrains the president and empowers him. Not only does it make his position as commander in chief unassailable by authorizing military action, it creates shared responsibility for war. A declaration of war informs the public of the burdens they will have to bear by leaving no doubt that Congress has decided on a new order — war — with how each member of Congress voted made known to the public.

Almost all Americans have heard Franklin Roosevelt’s speech to Congress on Dec. 8, 1941: “Yesterday, Dec. 7, 1941 — a date which will live in infamy — the United States of America was suddenly and deliberately attacked by naval and air forces of the Empire of Japan … I ask that the Congress declare that since the unprovoked and dastardly attack by Japan on Sunday, Dec. 7, a state of war has existed between the United States and the Japanese Empire.”

It was a moment of majesty and sobriety, and with Congress’ affirmation, represented the unquestioned will of the republic. There was no going back, and there was no question that the burden would be borne. True, the Japanese had attacked the United States, making getting the declaration easier. But that’s what the founders intended: Going to war should be difficult; once at war, the commander in chief’s authority should be unquestionable.

Forgoing the Declaration

It is odd, therefore, that presidents who need that authorization badly should forgo pursuing it. Not doing so has led to seriously failed presidencies: Harry Truman in Korea, unable to seek another term; Lyndon Johnson in Vietnam, also unable to seek a new term; George W. Bush in Afghanistan and Iraq, completing his terms but enormously unpopular. There was more to this than undeclared wars, but that the legitimacy of each war was questioned and became a contentious political issue certainly is rooted in the failure to follow constitutional pathways.

In understanding how war and constitutional norms became separated, we must begin with the first major undeclared war in American history (the Civil War was not a foreign war), Korea. When North Korea invaded South Korea, Truman took recourse to the new U.N. Security Council. He wanted international sanction for the war and was able to get it because the Soviet representatives happened to be boycotting the Security Council over other issues at the time.

Truman’s view was that U.N. sanction for the war superseded the requirement for a declaration of war in two ways. First, it was not a war in the strict sense, he argued, but a “police action” under the U.N. Charter. Second, the U.N. Charter constituted a treaty, therefore implicitly binding the United States to go to war if the United Nations so ordered. Whether Congress’ authorization to join the United Nations both obligated the United States to wage war at U.N. behest, obviating the need for declarations of war because Congress had already authorized police actions, is an interesting question. Whatever the answer, Truman set a precedent that wars could be waged without congressional declarations of war and that other actions — from treaties to resolutions to budgetary authorizations — mooted declarations of war.

If this was the founding precedent, the deepest argument for the irrelevancy of the declaration of war is to be found in nuclear weapons. Starting in the 1950s, paralleling the Korean War, was the increasing risk of nuclear war. It was understood that if nuclear war occurred, either through an attack by the Soviets or a first strike by the United States, time and secrecy made a prior declaration of war by Congress impossible. In the expected scenario of a Soviet first strike, there would be only minutes for the president to authorize counterstrikes and no time for constitutional niceties. In that sense, it was argued fairly persuasively that the Constitution had become irrelevant to the military realities facing the republic.

Nuclear war was seen as the most realistic war-fighting scenario, with all other forms of war trivial in comparison. Just as nuclear weapons came to be called “strategic weapons” with other weapons of war occupying a lesser space, nuclear war became identical with war in general. If that was so, then constitutional procedures that could not be applied to nuclear war were simply no longer relevant.

Paradoxically, if nuclear warfare represented the highest level of warfare, there developed at the lowest level covert operations. Apart from the nuclear confrontation with the Soviets, there was an intense covert war, from back alleys in Europe to the Congo, Indochina to Latin America. Indeed, it was waged everywhere precisely because the threat of nuclear war was so terrible: Covert warfare became a prudent alternative. All of these operations had to be deniable. An attempt to assassinate a Soviet agent or raise a secret army to face a Soviet secret army could not be validated with a declaration of war. The Cold War was a series of interconnected but discrete operations, fought with secret forces whose very principle was deniability. How could declarations of war be expected in operations so small in size that had to be kept secret from Congress anyway?

There was then the need to support allies, particularly in sending advisers to train their armies. These advisers were not there to engage in combat but to advise those who did. In many cases, this became an artificial distinction: The advisers accompanied their students on missions, and some died. But this was not war in any conventional sense of the term. And therefore, the declaration of war didn’t apply.

By the time Vietnam came up, the transition from military assistance to advisers to advisers in combat to U.S. forces at war was so subtle that there was no moment to which you could point that said that we were now in a state of war where previously we weren’t. Rather than ask for a declaration of war, Johnson used an incident in the Tonkin Gulf to get a congressional resolution that he interpreted as being the equivalent of war. The problem here was that it was not clear that had he asked for a formal declaration of war he would have gotten one. Johnson didn’t take that chance.

What Johnson did was use Cold War precedents, from the Korean War, to nuclear warfare, to covert operations to the subtle distinctions of contemporary warfare in order to wage a substantial and extended war based on the Tonkin Gulf resolution — which Congress clearly didn’t see as a declaration of war — instead of asking for a formal declaration. And this represented the breakpoint. In Vietnam, the issue was not some legal or practical justification for not asking for a declaration. Rather, it was a political consideration.

Johnson did not know that he could get a declaration; the public might not be prepared to go to war. For this reason, rather than ask for a declaration, he used all the prior precedents to simply go to war without a declaration. In my view, that was the moment the declaration of war as a constitutional imperative collapsed. And in my view, so did the Johnson presidency. In hindsight, he needed a declaration badly, and if he could not get it, Vietnam would have been lost, and so may have been his presidency. Since Vietnam was lost anyway from lack of public consensus, his decision was a mistake. But it set the stage for everything that came after — war by resolution rather than by formal constitutional process.

After the war, Congress created the War Powers Act in recognition that wars might commence before congressional approval could be given. However, rather than returning to the constitutional method of the Declaration of War, which can be given after the commencement of war if necessary (consider World War II), Congress chose to bypass declarations of war in favor of resolutions allowing wars. Their reason was the same as the president’s: It was politically safer to authorize a war already under way than to invoke declarations of war.

All of this arose within the assertion that the president’s powers as commander in chief authorized him to engage in warfare without a congressional declaration of war, an idea that came in full force in the context of nuclear war and then was extended to the broader idea that all wars were at the discretion of the president. From my simple reading, the Constitution is fairly clear on the subject: Congress is given the power to declare war. At that moment, the president as commander in chief is free to prosecute the war as he thinks best. But constitutional law and the language of the Constitution seem to have diverged. It is a complex field of study, obviously.

An Increasing Tempo of Operations

All of this came just before the United States emerged as the world’s single global power — a global empire — that by definition would be waging war at an increased tempo, from Kuwait, to Haiti, to Kosovo, to Afghanistan, to Iraq, and so on in an ever-increasing number of operations. And now in Libya, we have reached the point that even resolutions are no longer needed.

It is said that there is no precedent for fighting al Qaeda, for example, because it is not a nation but a subnational group. Therefore, Bush could not reasonably have been expected to ask for a declaration of war. But there is precedent: Thomas Jefferson asked for and received a declaration of war against the Barbary pirates. This authorized Jefferson to wage war against a subnational group of pirates as if they were a nation.

Had Bush requested a declaration of war on al Qaeda on Sept. 12, 2001, I suspect it would have been granted overwhelmingly, and the public would have understood that the United States was now at war for as long as the president thought wise. The president would have been free to carry out operations as he saw fit. Roosevelt did not have to ask for special permission to invade Guadalcanal, send troops to India, or invade North Africa. In the course of fighting Japan, Germany and Italy, it was understood that he was free to wage war as he thought fit. In the same sense, a declaration of war on Sept. 12 would have freed him to fight al Qaeda wherever they were or to move to block them wherever the president saw fit.

Leaving aside the military wisdom of Afghanistan or Iraq, the legal and moral foundations would have been clear — so long as the president as commander in chief saw an action as needed to defeat al Qaeda, it could be taken. Similarly, as commander in chief, Roosevelt usurped constitutional rights for citizens in many ways, from censorship to internment camps for Japanese-Americans. Prisoners of war not adhering to the Geneva Conventions were shot by military tribunal — or without. In a state of war, different laws and expectations exist than during peace. Many of the arguments against Bush-era intrusions on privacy also could have been made against Roosevelt. But Roosevelt had a declaration of war and full authority as commander in chief during war. Bush did not. He worked in twilight between war and peace.

One of the dilemmas that could have been avoided was the massive confusion of whether the United States was engaged in hunting down a criminal conspiracy or waging war on a foreign enemy. If the former, then the goal is to punish the guilty. If the latter, then the goal is to destroy the enemy. Imagine that after Pearl Harbor, FDR had promised to hunt down every pilot who attacked Pearl Harbor and bring them to justice, rather than calling for a declaration of war against a hostile nation and all who bore arms on its behalf regardless of what they had done. The goal in war is to prevent the other side from acting, not to punish the actors.

The Importance of the Declaration

A declaration of war, I am arguing, is an essential aspect of war fighting particularly for the republic when engaged in frequent wars. It achieves a number of things. First, it holds both Congress and the president equally responsible for the decision, and does so unambiguously. Second, it affirms to the people that their lives have now changed and that they will be bearing burdens. Third, it gives the president the political and moral authority he needs to wage war on their behalf and forces everyone to share in the moral responsibility of war. And finally, by submitting it to a political process, many wars might be avoided. When we look at some of our wars after World War II it is not clear they had to be fought in the national interest, nor is it clear that the presidents would not have been better remembered if they had been restrained. A declaration of war both frees and restrains the president, as it was meant to do.

I began by talking about the American empire. I won’t make the argument on that here, but simply assert it. What is most important is that the republic not be overwhelmed in the course of pursuing imperial goals. The declaration of war is precisely the point at which imperial interests can overwhelm republican prerogatives.

There are enormous complexities here. Nuclear war has not been abolished. The United States has treaty obligations to the United Nations and other countries. Covert operations are essential, as is military assistance, both of which can lead to war. I am not making the argument that constant accommodation to reality does not have to be made. I am making the argument that the suspension of Section 8 of Article I as if it is possible to amend the Constitution with a wink and nod represents a mortal threat to the republic. If this can be done, what can’t be done?

My readers will know that I am far from squeamish about war. I have questions about Libya, for example, but I am open to the idea that it is a low-cost, politically appropriate measure. But I am not open to the possibility that quickly after the commencement of hostilities the president need not receive authority to wage war from Congress. And I am arguing that neither the Congress nor the president have the authority to substitute resolutions for declarations of war. Nor should either want to. Politically, this has too often led to disaster for presidents. Morally, committing the lives of citizens to waging war requires meticulous attention to the law and proprieties.

As our international power and interests surge, it would seem reasonable that our commitment to republican principles would surge. These commitments appear inconvenient. They are meant to be. War is a serious matter, and presidents and particularly Congresses should be inconvenienced on the road to war. Members of Congress should not be able to hide behind ambiguous resolutions only to turn on the president during difficult times, claiming that they did not mean what they voted for. A vote on a declaration of war ends that. It also prevents a president from acting as king by default. Above all, it prevents the public from pretending to be victims when their leaders take them to war. The possibility of war will concentrate the mind of a distracted public like nothing else. It turns voting into a life-or-death matter, a tonic for our adolescent body politic.

This report is republished with permission of STRATFOR.

Cloning: Not a viable business model

There has been a bit of talk over the last few decades about cloning technologies and the idea that we are technically capable of human cloning at the present time. One way of generating public interest in the mass media when there isn’t much to talk about is to resort to the scary-technology-future schtick. While the achievement of human cloning is noteworthy from a technical standpoint, visions of a utopian/nightmare scenario in which vast numbers of human clones are produced to further a societal, military or economic end are simply not based in reality.

Humans have evolved the unique ability to adapt our environments to ourselves. This is the opposite of what other organisms have evolved to be capable of. That capability is built on the back of a number of significant human traits. To name a few: opposable thumbs, high intelligence, conscious imagination, multiple-layered memory, the ability to codify and classify our imaginings, complex inter-organism communications, high-order emotional responses and memory, and the critical ability to self-organize into super-organisms. It reads a bit like a product feature list, or more interestingly, a list of Unix-style package dependencies. There is no single trait which can grant the ability to do what humans spend most of their time doing, and there is no magic formula which can accurately model and explain human behavior.

The evolutionary pressures necessary to produce humanity in its present form are varied, complex and largely unknowable at the present time. That humans have ultimately come out of the process is nothing short of miraculous — at least by our present understanding. (On the other hand, strict observation of the anthropic principle forces us to abandon the notion that what has happened on Earth could not have happened elsewhere — and carrying this to a logical conclusion, if the universe is in fact infinite (or, stated another way, if the multiverse is infinitely multifaceted), then it must have occurred somewhere else any number of times. Whether the universe/multiverse/innerverse/whatever-verse is infinite is, of course, a subject of debate.)

Cloning, in essence, locks in whatever changes have occurred in the target organism indefinitely. This sets the cloned product outside of the world of evolutionary pressure and places it directly into the world of pure economic product — which is subject to the forces of supply and demand. At the present time people enjoy reading emotionally charged imaginings about mass clone scenarios, and yet the same people enjoy reading emotionally charged imaginings about the supposed overpopulation of the Earth — in both cases produced and marketed by the same media organizations (whose business is marketing their product, not understanding applied technology).

If the world is overpopulated then we have no need for clones, because the expense of cloning will not provide a benefit any greater than that of recruiting existing humans who were produced at no burden to whoever the employer is in the scenario. Leaving the burden of (re)production, rearing, education, etc. on a family model (be it nuclear, polygamist, polyamorous, broken home, hooker bastard spawn, whatever) provides available humans at an enormous discount compared to any commercial cloning operation and is therefore the correct market option. This leaves the only commercially viable cloning options as niche in nature at best. Rich men who really want to buy exactly 5 copies of their favorite shower girl may provide a tiny market of this nature, but there is no guarantee that all five clones will agree with whatever the job at hand winds up being, that the purchaser will be alive and remain interested in the project long enough to see it come to fruition (over a decade), or that the nature of the market will not change enormously before completion. (The ready availability of multiple-birth natural clones (twins, triplets, etc.) has not produced a similar market in any case outside of a very small niche in adult services, and that market already appears to be saturated. It turns out that variety tends to be the greatest male aphrodisiac anyway.)

So this leaves what? Very little market for one of the few proposed uses of clones.

The military has no use for clones over what use it already gains from mass human screenings of naturally evolved humans who do not come with the large overhead of a human cloning program attached. The idea that the military wants identical soldiers is flawed to begin with, however. The U.S. Army has a deep recognition of the benefits of having a hugely diverse fighting force and would not want to sacrifice those advantages in exchange for another budgetary drain: the institutional burden of becoming Dad for a large number of clones — who may decide that they have better things to do than serve Washington once they have all the big guns anyway. War is a highly emotional experience, and the support soldiers provide one another and the culture that has evolved within the military because of this are almost as complex to understand as the phenomenon of humanity to begin with. Trying to successfully replicate or replace such a complex system that already exists, works well and is free with one which does not yet exist and might fail at enormous cost would be a very difficult thing to pitch to taxpayers.

Once again, this leaves very little potential market where the imagination has a fun time seeing one.

The only viable cloning market for the foreseeable future would be in organ production and harvesting. There are a few reasons why human clones will never be viable products in this market either, however. Once again, the expense and time required to clone a human are equal to those required to produce the human who is in need of a bio replacement in the first place; the primary difference between the clone and the natural human is that the existing human would already be rich and well enfranchised enough to be in a position to order a clone from which to harvest his needed spare parts (and the clone, obviously, would not). This conjures up images of a really fun movie from a few years ago, “The Island”, which told the story of two clones produced for the purposes of organ replacement suddenly realizing what they are and deciding that such a short future wasn’t really for them. But that is the movies. Back in the world of reality we already have the technology to clone human organs, and these organ clones do not require fully human hosts. It is possible to grow a human ear on the back of a lab rat, a human heart inside of a pig, and likely other parts on other hosts which are faster and far cheaper to maintain and harvest than human clones would be.

Once again, no market here, either.

Medical testing is another area where I’ve heard talk of mass human cloning. Perfect test subjects, so some claim. But these are only perfect test subjects on the surface. Identical people are not perfect test subjects in the slightest when it comes to medical testing. The most important aspect of drug, allergy, ergonomics, athletic tolerance and other medical testing is the statistical significance of the test group. The word “group” here is everything. Testing clones would merely provide the same person for testing a number of times, which amounts to just testing the same person ad nauseam at enormous expense for no gain. Humanity is a varied and evolving thing, and medical advancements must take that into account or else those advancements themselves become useless and thereby unmarketable.

Sorry sci-fi fans, no market here, either.

For the same reasons that medical testing on clones is useless, so is an entire society created from clones. A clone society is instantly susceptible to lethal mass epidemics from every vector. It is very likely that a flu that kills one person would kill them all, whereas natural humanity tends to be largely resistant to every pathogen in nature (and even engineered ones) when taken as a whole. Though humans may suffer to varying degrees independently of one another due to individual variations, those individual variations when combined and spread across the masses of humanity provide an overwhelmingly powerful insurance against the mass extinction of humanity. A cloned society removes this ultimate protection at its root and leaves the population totally naked as a whole. Contemplating these realities means contemplating one’s own mortality and relative insignificance, and I imagine that is a likely reason why people don’t think about such things when they see scary stories on TV or the internet about future dystopic scenarios of a planned Earth-wide all-clone society (a la some Illuminati conspiracy variants).

So all-clone society? Just not workable. Not just economically unviable, a downright unsafe way to try to manage humanity.

So why all the fuss? Mainly because there is a big market in generating public drama around new technologies which the general public does not yet fully understand or have any practical contact with. The technologies required to achieve a human clone are significant, but they will manifest themselves in the market (and already do) in very different ways than current popular media proposes.