Friday, July 10, 2009

Book Review: "Palimpsest" by Charles Stross

"Wireless", the new collection of short stories by Charles Stross, is full of good work; many of the stories will be familiar to his fans because they've been previously published. But the longest story of all, a novella titled "Palimpsest", hasn't been publicly available to readers before. It's primarily why I bought the book, and I'm very happy indeed that I did. Let me tell you why. The next section is a spoiler-free introduction to this review. The section after that, where I get into more detail, is full of spoilers: you have been warned.

Fair Warning Disclosure: I've met and drunk beer with Charlie Stross, and been a regular commenter on his blog over the last couple of years. It's likely though that I would never have met him if I hadn't been delighted by his writing first. This review is the result of a great story, not a friendship.

Intro - No Spoilers
"Palimpsest" is a time-travel story in the tradition of Poul Anderson's "Time Patrol", Fritz Leiber's "Change War", and Isaac Asimov's "The End of Eternity". And there are definite resonances with Robert Heinlein's "All you Zombies", David Gerrold's "The Man Who Folded Himself", and Chris Roberson's "Here, There, and Everywhere". I think it most closely resembles Anderson's stories, as I'll explain in detail with spoilers later. In fact, there's a reference to the "Time Patrol" series in one character's name. There are also several themes that were important in Anderson's stories here: the solitude and alienation of a temporal operative outside their own time, the moral dilemmas that result from being able to change history, and the deeper questions that need to be asked when you start wondering which history is the "real" one.

But this isn't an homage or a copy; Stross goes deeper here into these themes than Anderson did. He also spends more time and energy on the "how" of time travel, setting up a theory of physics that consistently supports the plot and the themes he's covering. Now I'm a geek when it comes to far-out theories of physics, and I'm fascinated by questions about the nature of time, and the possibility of time travel, so this is catnip for me; YMMV. But there are some twists and turns in the theory here that made me sit back and think about all the time travel stories I've read, and what their assumptions about the nature of time imply about the use and abuse of time travel. At that point, I think I've gone beyond a book review, so I'll save most of that thinking for a follow-up post, tentatively titled "On the Paraphysics of Time Travel".

In an afterword, Stross says,
"Palimpsest" really wanted to be a novel. It really, really wanted to be a novel. Maybe it will be, someday.
I agree that it should be, and hope one day that it will. There are characters and settings which deserve more description and development than they have.


Here There Be Spoilers
"Palimpsest" covers about as much scope in space, time, and human actions as is possible. The story begins and ends in the second person, talking to the man who is the protagonist and point of view, Pierce. In between it follows Pierce's career from recruitment into the trans-temporal organization called "Stasis", through his 20-year training, and on into active duty. Stasis' mission is to preserve the existence of humanity, despite extinction events, and in the face of the eventual evolution of the sun into a red giant that will destroy Earth. Along the way it describes, in chapters reminiscent of Olaf Stapledon's "Star Maker", several alternative histories of the Solar System (with some background on the rest of the universe). In the course of his training and work Pierce studies and helps change the course of civilizations, and helps save humanity from one of many extinctions. But in this story, that's all background; the prime mover of the plot is, ultimately, the question that Pierce must answer: who does he want to be, what history does he want for himself, the answer to which question turns out to be have a rather large bearing on the history of all humanity.

Time travel has a very SFnal problem: if you want an interesting story, you have to go beyond the science we know and make up something. It's gotten more interesting since the 1970s, with the discovery that General Relativity implies time travel into the past may be possible under certain extreme conditions, where before scientists were fairly certain it was impossible. There still aren't any time travel mechanisms known to be workable with physically feasible technology, and the ones that have been proposed are limited in range to the period of the existence of the time travel device, so writers have to either resort to some (large) amount of handwaving in describing how time travel works, or simply say, "trust me, it works, let's get to the story". Stross has chosen to describe a particular mechanism involving wormholes and singularities that isn't close to any current science, but that allows for the kinds of abilities and problems that Stasis has. It allows time travelers to change history without affecting themselves (the first task of a recruit is to kill his own grandfather before he has children), and allows multiple changes to be made in a given event without the outcome of the changes being known for certain from some viewpoints.

Stasis is similar to Anderson's "Time Patrol" in that it is a trans-temporal organization that has accepted responsibility for patrolling and controlling history. Like the Time Patrol, it is run by dimly seen and poorly understood people from some far future time; it has a clearly stated mission, but the operations it runs are not always clearly related to that mission. Stasis, like the Patrol, recruits from all eras, sends its recruits to an Academy in a remote time, and then stations most agents in a time period around their own origins. And there are, as in the Time Patrol, senior agents whose job it is to watch over history and take action when necessary to correct even the actions of other Stasis agents. The differences arise because Anderson never went very deeply into the overall organization and goals of the Patrol, preferring to concentrate on the work of his protagonist Unattached Agent Manson Everard (whose name is referenced in "Palimpsest" by the name of the head of the Stasis Academy, Agent Manson; the fact that this is his only name, and the acts he and Stasis require of recruits and students, may also make it a reference to Charles Manson).

As always for Stross, it's not just about the hardware or action, it's about the people and how they interact on personal and political levels. Pierce has questions about how Stasis works, and what it does. Those questions get more urgent even before he finishes training, when he is sent to a primitive time to provide supposedly routine cover for the extraction of another agent. Pierce is surprised to discover that the other agent is a prior instance of a former lover, an instructor at the Stasis Academy, who has not yet met him on her lifeline. Before the extraction can be completed, a multi-way firefight with advanced weapons kills Pierce several times despite frantic changes in the past of the battle, and eventually leaves him seriously injured. Are there factions in Stasis? Is there another trans-temporal organization? And why try to kill him, a lowly trainee?

The questions become more complicated when Pierce is interrogated by an agent of Stasis Internal Affairs who looks exactly like Franz Kafka, author and paranoid. Is a future Pierce suspected of trying to assassinate himself? Kafka says he is not, but also says he will be talking to Pierce again.

Pierce's view of Stasis and his own part in it becomes more morally ambiguous, and his own history and future become more and more uncertain, as he learns more of how time travel works and participates in more and more operations. His adopted home in an advanced time, where he was a husband and father, and respected as a Stasis agent, is lost to him in the manifold potential and alternative histories, and there is some suggestion that Internal Affairs is holding the knowledge of how to find it again as a hostage for Pierce's good behavior. He learns that there are rogue agents, striking out to create empires in time, or to overthrow Stasis entirely. There may be factions within Stasis, battling over the tactics, and even the objectives, of the service. All or none, or some combination, of these may or may not exist, and may or may not be defeated by Internal Affairs, depending on when he currently is, and how the most recent temporal battles were resolved.

Stasis is the ultimate panopticon society, extended over all of humanity for all of history. That might be a good thing, because it allows all of human history and experience to be stored in the Final Library, an archive trillions of years in the future. But is the cost too high? Has Stasis, and in particular Internal Affairs, reduced human history to the limited stage of a single solar system, where it plays out a repetitive cycle of extinction, followed by reseeding of primitive tribes which evolve into advanced societies and are again made extinct? Is there a better way, one that involves humanity expanding into the Universe, making itself immortal by making sure no extinction can be widespread enough to kill it completely?

Stross has often written warnings about the panopticon society we in the West are building now; here he shows how the ultimate panopticon, based on a timid need for security in the face of the potential threats of the Universe we live in, could control humanity for its entire future. But, given the nature of time and time travel he uses here, the outcome is never certain until no one's left to go back and try to change it, and a time traveler can choose to completely revise their own life in the light of later discoveries. Pierce must finally make a decision about what he is willing to do, and how he wants the world to work, and is allowed by the technology he has available to change the past to support that decision. We know that he might succeed, although it is never certain that he will, even when we see him return to free himself of his past decisions. Accepting that humans must have free will, and accepting the responsibility for using that will, as Pierce does, means the outcome cannot be determined beforehand.

In the end, "Palimpsest" delivers a lot of sensawunda, as it jumps around in alternative time, showing some possible histories of the Earth and humanity. It also delivers several interesting characters. Foremost is Pierce, whom we watch grow over several decades from a teenage recruit to a veteran of life inside Stasis as well as in domestic life. His wife and his lover in Stasis are not as fully developed, and this is one limitation of the story that could be improved by expansion to novel length. I'd like to see Kafka developed a little as a character as well; at present he's an Ominous Presence rather than a person.

It would also be helpful to see more of Stasis' operations, to show in more detail how they use the ability to change the past on the fly. As it is, I think I see what's going on, but there's still a bit of handwaving (for instance in the battle where Pierce is killed several times), and I think the inter-time tactics should be clearer, since the climax of the story, when Pierce finds out who the rebel faction within Stasis is, depends on them.

But first and foremost, I want more of this story to read. If time travel stories interest you, I think you'll feel the same way.


Wednesday, July 8, 2009

And the Season Ends in a Blaze of Glory!

Executive version: Feeling great, on with my life!

I just got home from my first post-op exam by the PA of the surgeon who did the back surgery on me 3 weeks ago. I've been feeling really good about things since I came out of the anesthesia after the surgery, despite being told then, "Oh, yes, we worked for 6 1/2 hours, why did you think we expected 3?" Turns out they really expected a fair amount of work, but a good outcome, and I was away for those hours and didn't notice, so we'll let that one pass.

Post-op support was terrific; the nursing staff was professional and friendly (and very careful about the Toradol injections they used for pain relief; if you don't push those with extreme care they burn like a mo-fo, and I barely felt even the ones given by the student nurse). Both rehab therapy and occupational therapy were immensely helpful in showing me how to use the brace and the walker, and going through the way our house is set up (I brought photos) to talk about how to deal with obstructions and such.

Progress has been ahead of schedule since then; I only spent 3 days post-op in the hospital instead of the expected 4, and I've been getting around with a walker and a back brace since then. The walker is getting less necessary, though I'll need the back brace for the next six months or so as the spine fuses into its final shape. Comparison x-rays show the spine is properly aligned; the incision wound is healed (and the rash that I had on it was a fairly common side-effect of the surgical adhesive they used to close the wound; the rash is gone now).

So I'm declaring victory and holding a parade. I can drive as long as I don't need to use the pain pills (and I haven't for almost a week), I can make a cup of tea and carry it into the living room or out on the deck to sit and read or watch the trees and the birds, and my mobility is improving on a daily basis.

Saturday, June 13, 2009

Almost the End of Surgery Season

After two highly successful surgeries, one a gum transplant and the other bilateral carpal tunnel release, I've got one more to go. On Monday I go in for a lumbar decompression and fusion. This is the big one, because it's not day surgery; I'll be in the hospital for 3 to 4 days, and the recovery time is weeks. In the past, I've usually posted to the web communities I'm active in before the surgery to let them know I'll be offline for awhile. This time I'm simply going to go in, and then start posting again when I can, probably after a week or so. I'm posting to this blog so someone who's really wondering where I've gotten to can find out.

In case anyone does come by to find out where I am, I want you to know that this is not risky or exotic surgery, and the actual techniques are a lot better (read: less invasive) than they used to be. And unlike the last time I had back surgery, in 2001, I've been really pleased with the pre-op support of the doctor and the hospital, and the really good planning for the hospital stay and subsequent therapy. So I have a lot of reason to expect a good outcome.

See you in a week or so.

Tuesday, June 2, 2009

Parallel Objects Never Meet: What to do with all those cores

In the last few years the steady increase of microprocessor clock rates and decrease of power consumption has leveled out, due to physical limits that IC designers haven't been able to overcome. This change in a decades-long trend means that companies like Intel, whose business model is based on continually increasing the performance of their products by following Moore's Law toward ever smaller individual features, and thus ever larger numbers of transistors on the silicon their chips are made of, have had to find a new way to turn those extra transistors into performance, now that simply adding more memory and making computing elements go faster is off the table. The strategy that Intel, and the other companies trailing it down the Moore's Law curve, have taken is to freeze the speed and functionality of each processor, and then put multiple processors, called "cores", on each chip. The current generation of chips on the market has 4 or 6 cores per chip, with 8-core chips coming soon. Chip designs capable of 64 cores have been demonstrated in the lab.

Of course, it's not that simple to take advantage of those extra cores with existing software. Parallel processing was a very active field of research in the 1980s, but the conclusion of that research was that new programming languages would be needed, and all new software would have to be written with the programmers controlling the parallelism of their code explicitly. Moreover, most research was in specialized application domains like numeric programming and computer vision, where the required parallelism fell easily out of the problem descriptions. But most computing from the 90s on was business or personal computing, not scientific, so the chip manufacturers didn't see parallel computer architectures as an important part of their market. Parallel processing fell out of vogue even in research, and very little money was available for research and development in the commercial sector of the computer industry.

The rise of the web application and the development of large scale application servers provided one way to use parallelism for business and personal computers: since any given server is doing pretty much the same thing, only with different data, for each web client it serves, it's not hard to build server software that can take advantage of large numbers of processors. Server computers have been built with multiple processors for more than 10 years now; using multiple-core chips isn't difficult.

But that doesn't help the large number of personal and business desktop and laptop (and soon palmtop) computers. Utilizing 2 cores isn't too hard, and 4 cores can be kept somewhat busy, but after that it's not clear that there's a lot of benefit to more cores without completely rewriting the software, a monumental and very costly task for most computer applications. It's not surprising then that a lot of research and money is going into looking for ways to reduce the amount of rewriting necessary. Some of the ideas being investigated include extensions to existing programming languages to allow explicit control of parallelism, and runtime support libraries that do the work of synchronizing and controlling code running on multiple processors.
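To make the "runtime support library" idea concrete, here's a minimal sketch in Python (my own illustration, not one of the research projects alluded to above): the programmer marks the parallelism explicitly, and the library handles distributing the work across cores and synchronizing the results.

```python
# Sketch of the runtime-library approach to parallelism: the
# programmer explicitly hands work to the library, which schedules
# it across available cores and collects the results.
from concurrent.futures import ProcessPoolExecutor

def expensive(x):
    # stand-in for a CPU-bound computation
    return sum(i * i for i in range(x))

def parallel_map(func, items):
    # the library owns all the synchronization; the caller just
    # sees an ordinary map over a list
    with ProcessPoolExecutor() as pool:
        return list(pool.map(func, items))

if __name__ == "__main__":
    print(parallel_map(expensive, [10, 100, 1000]))
```

Note that even here the programmer has to know which calls are safe to run in parallel; the library can't discover that by itself, which is exactly the rewriting burden described above.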

I think there's another way to do the job, one that may even allow many existing programs to simply be recompiled, and will almost certainly allow tools to examine and modify or flag code that needs to be changed to take advantage of multiple cores. It's based on the fundamental nature of object-oriented programming, in which an object is an encapsulated piece of computation, including code and data.

When I first learned about objects, they were described to me as conceptually little computers, with inputs and outputs. Each object could, at least in theory, be run on a separate computer, as long as its internal state was not accessible to other objects. In some of the first object-oriented languages this was the basic model of computation: there was no way for one object to access another object's data; the only thing an object could do was send a message to another object and receive a reply. That model works very nicely in a parallel environment, because the message-sending mechanism can have synchronization between processors, or even distributed computers, built into it.
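The "objects as little computers" model can be sketched in a few lines (Python used purely for illustration; `Counter` and `send` are hypothetical names, not from any of the languages discussed): state is private, and the only operation is sending a message and receiving a reply.

```python
# Toy illustration of the pure message-passing object model: an
# object's state is private, and the only way to interact with it
# is to send it a message and receive a reply.
class Counter:
    def __init__(self):
        self._count = 0  # private state; never touched from outside

    def receive(self, message, *args):
        # the single entry point: dispatch on the message name
        if message == "increment":
            self._count += 1
            return self._count
        if message == "value":
            return self._count
        raise ValueError(f"does not understand: {message}")

def send(obj, message, *args):
    # in a parallel implementation, this is where synchronization
    # (or shipping the message to another core) would live
    return obj.receive(message, *args)

c = Counter()
send(c, "increment")
send(c, "increment")
print(send(c, "value"))  # → 2
```

Because every interaction funnels through `send`, a runtime could put the sender and receiver on different cores without changing either one.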

In fact, there are several object-oriented environments that allow a message to be passed to a local object on the same computer as the sender, or to an object on another computer, depending on where the destination object is running. It's even possible for an object to move between computers and have its messages follow it correctly, without any change in the program that the sender or receiver are running. Smalltalk, for instance, does not allow one object to access another's data; only message passing is allowed between objects, so implementing such a system is relatively easy. Java does allow an object of a given class to access the data in an object of that same class, so it's not as easy. The difference between the two languages is that in Smalltalk any object of any class can be moved around, whereas in Java an interface has to be designed for the destination class, and the sending object has to use a reference of that interface type as the destination object; an arbitrary class cannot be used. And, of course, non-object-oriented languages need not apply.
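Here's a toy sketch of that location transparency (my own illustration; the stub and transport names are hypothetical, and `FakeTransport` stands in for a real network link): the sender's code is identical whether the destination object is local or lives on another machine.

```python
# Location-transparent message passing: the sender calls send()
# the same way whether the receiver is local or "remote".
class Greeter:
    def greet(self, name):
        return f"Hello, {name}!"

class LocalStub:
    def __init__(self, obj):
        self._obj = obj
    def send(self, message, *args):
        # local case: deliver the message directly
        return getattr(self._obj, message)(*args)

class FakeTransport:
    # stand-in for a network link: a registry playing the role of
    # a remote host
    def __init__(self):
        self._objects = {}
    def register(self, object_id, obj):
        self._objects[object_id] = obj
    def call(self, object_id, message, args):
        # a real transport would marshal the message, ship it over
        # the wire, and unmarshal the reply
        return getattr(self._objects[object_id], message)(*args)

class RemoteStub:
    def __init__(self, transport, object_id):
        self._transport = transport
        self._object_id = object_id
    def send(self, message, *args):
        return self._transport.call(self._object_id, message, args)

transport = FakeTransport()
transport.register("greeter-1", Greeter())

local = LocalStub(Greeter())
remote = RemoteStub(transport, "greeter-1")

# identical code on the sender's side, regardless of location
print(local.send("greet", "World"))   # → Hello, World!
print(remote.send("greet", "World"))  # → Hello, World!
```

If an object migrates, only the stub needs to change; the sender never knows.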

Unfortunately this style of program design is not common even among object programmers. Most OO languages don't have any way to prevent an object of a given class from accessing the data of another object of that class, and many programmers have fallen into the habit of using this feature on the assumption that direct data access is faster than messaging in the local case. In fact, most programmers never think of the remote case unless their program requirements include it. So most OO programs have inter-object data accesses, and often global data (such as class variables) that would have to be modified into message sends or replicated and synchronized data.

With some languages this may not be a great problem: the semantics of the language allow a smart compiler or runtime support system to recognize the data accesses and convert them into message sends. Global data accesses can be converted into message sends to class objects. This allows objects to communicate even if there is no shared memory area for global data. Smalltalk would require a minor modification to the compiler to handle class variable accesses. Java would require a new compiler to detect and modify data accesses. Both languages could benefit from compiler and runtime support to detect object dependencies and optimize the placement of objects in memory spaces and the grouping of objects in threads or processes. Python might require some constraints on use, as it has some non-object functionality in it (the module namespace might cause some trouble, for instance), but a suitably jiggered compiler might be able to catch the problems, and perhaps even fix them. CLOS (Common Lisp Object System) would be a good language to try; I suspect that some features of the base Lisp would have to be constrained (macros, for instance).
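The transformation such a compiler or runtime would apply can be mimicked at the object level in Python (a hypothetical sketch of the idea only, not an actual compiler): intercept direct data accesses and rewrite them as message sends.

```python
# Sketch of the rewrite a smart compiler/runtime could do
# automatically: direct data accesses become message sends.
class MessageSendingProxy:
    def __init__(self, target):
        # bypass our own __setattr__ while setting up internal state
        object.__setattr__(self, "_target", target)
        object.__setattr__(self, "_log", [])

    def __getattr__(self, name):
        # a direct read becomes a "get" message to the target
        self._log.append(("get", name))
        return getattr(self._target, name)

    def __setattr__(self, name, value):
        # a direct write becomes a "set" message
        self._log.append(("set", name, value))
        setattr(self._target, name, value)

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

p = MessageSendingProxy(Point(1, 2))
p.x = 10          # looks like a plain data access...
print(p.x)        # → 10  ...but both lines went through message sends
print(p._log)     # → [('set', 'x', 10), ('get', 'x')]
```

In a parallel runtime, the logged "messages" are exactly the points where synchronization or cross-core delivery could be inserted without the program's source changing.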

The more impure object languages like C++ are problematic; the more ways there are to code without or in spite of objects, the harder it will be to find all the dependencies that would break correct parallelism, and the more the culture of the language's users is likely to cause them to reject the idea. That's fine with me; I'm not a fan of those languages (having been a C++ programmer for years, and a member of the C++ language standard committee). All the arguments I've heard for low-level languages have left me cold: I don't believe that the language I use has to look like the inside of a crippled von Neumann machine, because I don't see any advantage to that. Making it difficult to do things in a natural way because that's "good programming discipline" is just silly1. And the efficiency and performance arguments are based on not understanding the alternatives; a good Lisp compiler can give a C program a run for its money any day.

I'm curious to see if there are software developers, programming language and compiler experts, or parallel system gurus out there who can tell me why this scheme won't work, or why it's obviously less practical than any other scheme for making use of multiple cores for existing software applications, not system software.

1. To me, programming languages are user interfaces to computer capabilities. Making a bad interface deliberately is downright perverse in my view, no matter what the justification. Instead of forcing the programmer to bend to some model of the computer, why not spend time and energy finding ways of optimizing programs written in user-friendly languages?


Monday, May 18, 2009

The Measure of Money

A recent post on Near Future Labs discusses how the notion of money as a quantity arose, and asks how that affects design. That started me thinking about the way the scientific notion of measure and quantity has changed in the last couple of centuries, and how that might affect design. You should read the original post first, to set the context.

There was another reason for quantification in the natural sciences that didn't really become compelling until the late 19th or early 20th century: the use of dimensionality to fit quantities into a larger structure of related measurements. Thus, in the standard set of physical quantities, every one is either a computable function of mass, length, and time, or a pure ratio between two quantities that have such a function. This meant that every measurable quantity was related to all the others (a particularization of the general concept that everything in the universe was related to everything else by reduction to physical objects and structures), and that all physically important concepts were in principle derivable from those objects and structures.
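That structure can be made concrete with a toy sketch (my own illustration): represent each quantity's dimensions as a triple of exponents of mass, length, and time, and every derived quantity is then a computable function of those.

```python
# Physical dimensions as (mass, length, time) exponents.
MASS   = (1, 0, 0)
LENGTH = (0, 1, 0)
TIME   = (0, 0, 1)

def mul(a, b):
    # multiplying quantities adds their dimension exponents
    return tuple(x + y for x, y in zip(a, b))

def div(a, b):
    # dividing quantities subtracts them
    return tuple(x - y for x, y in zip(a, b))

VELOCITY = div(LENGTH, TIME)    # L T^-1
ACCEL    = div(VELOCITY, TIME)  # L T^-2
FORCE    = mul(MASS, ACCEL)     # M L T^-2
ENERGY   = mul(FORCE, LENGTH)   # M L^2 T^-2

# a ratio of two like quantities is dimensionless: a "pure ratio"
print(div(ENERGY, ENERGY))  # → (0, 0, 0)
```

Every quantity in the standard set reduces to such a triple, which is exactly the "large structure of related measurements" described above.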

The 20th century saw a triumphal rise of physical reductionism in other sciences: psychology, sociology, and economics. In principle, then, the measure of money could be reduced to physical first principles and, at least theoretically, computed from them. The end of the 20th and the beginning of the 21st century saw the rise of interest in qualitative studies in the physical sciences (e.g., the topology of dynamic systems and the study of self-organization), and the realization (by some, at any rate) that money isn't a measure; it's a model, in the sense that harmonic motion is a model of a pendulum: accurate in representing the behavior of some measures if others are ignored, with no guarantee that unquantifiable qualities have no effect on the system the model represents.

If we think of money as a model, one of the primary problems of the modern world becomes easily understandable. Monetary measure is often made with no regard to certain values, which are considered either irrelevant or immeasurable. Consider the computation of cost of an item; it very rarely includes the cost of retirement of the item when its use ends. The result is that the retirement cost becomes hidden; it still must be paid, but who pays it is not computable from the model. When the need to pay the cost is brought up, the reaction is often either that such a cost is outside the purview of the model in use (i.e., that it should remain hidden) or that it in fact is not quantifiable (i.e., that it's not really a cost). The cost of dumping trash into landfill is an example in which the controversy is starting to become much more important to society. Similarly, the cost of the carbon dioxide exhaust from burning hydrocarbons is just now being debated all over the world.
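A trivial worked example (all numbers invented for illustration) shows how the model hides the cost:

```python
# Toy cost model: the price the buyer sees omits the retirement
# cost, so the model cannot say who pays it. Numbers are invented.
def visible_cost(manufacture, distribution):
    # the conventional model: only costs inside its purview
    return manufacture + distribution

def true_cost(manufacture, distribution, retirement):
    # the fuller model: retirement (disposal) included
    return manufacture + distribution + retirement

price  = visible_cost(manufacture=40, distribution=10)
total  = true_cost(manufacture=40, distribution=10, retirement=15)
hidden = total - price
print(price, total, hidden)  # → 50 65 15
```

The 15 units of hidden cost don't disappear; the model simply has no account of who ends up paying them.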

To answer the three questions at the end of your post in light of the above:
Can design make decisions exclusive of the quantity of money theory?
Generally, no, but other factors must be taken into account in all decisions, and money must be considered vulnerable to errors of incomplete or incorrectly estimated computations. Money is always a factor to some extent; it is a useful measure of the amount of effort and resources required to complete the object being designed, and resource allocation decisions always have to be made. In some cases money need not be a part of some lower-level decisions, but when that decision percolates up to where it affects other decisions, it becomes part of the set of tradeoffs dependent on the upper-level decision. In this, design is very similar to engineering.

Can an operation create things without quantifying its future in terms of margins of profit?
It certainly can, but this may not result in a good outcome, when future decisions must be made in light of the resources available after the things are created. In any case, it must always be recognized that "margin of profit" is not an exact measure, but the result of applying a (possibly inaccurate or incomplete) model, and so may not be any more useful than a qualitative factor.

Can we look at a falling rock and marvel at its destiny, rather than the quantities measured in its velocity curve? Or fire, considering how it reaches for the sun?
Of course, and there are many times when we should.

Wednesday, April 1, 2009

The Singularity is Here!

Scientists report that the Singularity will arrive at 3:34 PM EDT today, when Project ReallyDeepestBlue's latest version Eliza program runs on their new quantum computer hardware. At this time, Eliza, based on the famous simulation of an incompetent Rogerian psychiatrist, will become more intelligent than the sum total of the entire human race, viewed as a committee. Despite the devout wishes of atheists, when that happens there will be a god, and her name will be Eliza. All predictions about what happens after 3:35 have been declared officially null and void where prohibited by law, especially natural law. One source, speaking on condition of anonymity to avoid divine retribution, said, "We should be glad they didn't decide to run Parry, the paranoid simulator. Having a god looking over its shoulder all the time, and constantly accusing us all of plotting against it wouldn't be good for the economy."

Friday, February 13, 2009

Blog for Darwin — The notion of fitness

It's Darwin's 200th birthday, a good day to look back at the life and works of a man who changed the course of science and civilization, in many more ways than he expected, I would imagine. It's also a good time to reflect on how we understand evolution today, both in the scientific study of life (and other things that evolve, it's not just for life any more), and in our broader worldview, as metaphor and justification for everything from art to politics.

"Fitness" is one of those words that pops up in every discussion of evolution, but which turns out to have almost as many meanings as people using it. The varied meanings have led to confusion, uncertainty, and doubt in public discussions of evolution, and one deep misunderstanding led to the political movement for "eugenics", which cast a shadow over the public discourse on evolution for a generation or more. This same misunderstanding is at the heart of "Social Darwinism", a pernicious political philosophy that is still popular in some circles today.

Darwin saw fitness as the measure of how well an organism can survive to pass on its heritage (genetics and the biochemistry of reproduction were as yet unknown when Darwin wrote Origin of Species). It consisted, he said, of 3 attributes:
  • Survival or mortality selection - Organisms that survive at least to the end of their reproductive phase are fitter than those who don't, because they're likely to have more offspring.

  • Mating success or sexual selection - Many species have some form of mate selection process which makes some organisms more likely to mate or likely to mate more often and thus produce more offspring.

  • Family size or fecundity selection - The more mature offspring (that is, those that live to reproduce themselves) produced by an organism, the fitter it is. This takes into account the two major reproduction strategies:

    • produce as many offspring as possible, putting as little resource into each as possible (squids, for instance, do this),

    • produce fewer offspring, putting significant resources into each, to increase each one's chances of making it to reproductive age (humans do this).
For many years, the popular notion of "fitness" was that there was an actual measure that could be attached to an organism, as if there were one number that summed up all the attributes of fitness. This was the idea that the Eugenics movement advocated: that it was easy to determine which individual humans were more fit, and which were less fit, and that to prevent those less fit from reproducing would improve the overall fitness of the human race.

Even many scientists who were not experts in evolutionary theory thought that fitness was measurable: computer simulations of evolution (see Tierra, for example) used a simple measure of fitness, seemed successful in demonstrating the evolution of new forms of life, and were offered as evidence for this position. Unfortunately, such programs are fundamentally limited in what can evolve within them (see below), and so the statistics of the changes in them don't match those we find in biological evolution.

Evolutionary theory has itself evolved in the last half century or so to include concepts from mathematics and physics like the thermodynamics of information and algorithmic complexity theory. An algorithm is a precise set of instructions for carrying out some computation; all computer programs are implementations of one or more algorithms. But it turns out that fitness cannot be measured algorithmically: some aspect of an organism's fitness that isn't part of a given algorithm for measuring fitness may have a major effect on the organism's ability to survive or reproduce if the organism's environment changes. And as new traits evolve, any one of them may have an effect on the organism's fitness that isn't measured by a given fitness algorithm.

To see why this is so, consider the K-T transition, the short period between the Cretaceous and Tertiary periods when the dinosaurs went extinct. The current (still somewhat controversial) consensus theory is that the long-term effects of an asteroid impact on the Yucatán Peninsula changed the Earth's climate so that the entire food chain many animals depended on was disrupted. What algorithmic measure of fitness for a given individual of one dinosaur species could take all such possible catastrophic changes into account? And what measure of fitness for the mammal species that survived would include all the characteristics that let them survive? Yes, it's always possible to create a fitness measure that includes any given set of characteristics, but it's not possible to create a fitness measure that includes all characteristics that might, under any circumstances, have an effect on the selection of an organism. There's no way to predict what characteristics any organism may have in the future, let alone what characteristics will be selected for or against.

In that light, it seems that fitness depends on all the attributes of an organism in combination with all the aspects of its environment, where the environment includes all the physical aspects of the world that it interacts with, as well as the attributes and actions of all other organisms that it interacts with. And those attributes and actions are themselves the result of the process of evolution, which is not algorithmic or predictable in any useful way.

In a sense, fitness is a tautological concept: an organism is fit if it survives to have mature offspring; one organism is more fit than another if it produces more mature offspring than the other. There's no way to predict which organisms are fitter; it's possible to make reasonable bets based on the probability of survival and reproduction of an organism with a given genome in a given environment, but impossible to predict how that environment may change in detail at the level of each organism.
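The point that fitness can only be measured in retrospect can be made concrete with a toy simulation (the genotypes, survival probabilities, and abrupt "climate change" here are all invented for illustration). A fitness score computed in one environment says nothing about which genotype does better once the environment shifts:

```python
import random

random.seed(42)

# Two hypothetical genotypes with different survival odds per environment.
SURVIVAL = {
    "warm": {"thick_fur": 0.2, "thin_fur": 0.8},
    "cold": {"thick_fur": 0.8, "thin_fur": 0.2},
}

def realized_fitness(genotype, environment, trials=10_000):
    """Fraction of individuals surviving to reproduce -- measured after the fact."""
    survivors = sum(random.random() < SURVIVAL[environment][genotype]
                    for _ in range(trials))
    return survivors / trials

# In a warm climate, thin fur looks "fitter"...
print(realized_fitness("thin_fur", "warm"))   # ~0.8
print(realized_fitness("thick_fur", "warm"))  # ~0.2

# ...but the same genomes rank the other way once the climate changes.
print(realized_fitness("thin_fur", "cold"))   # ~0.2
print(realized_fitness("thick_fur", "cold"))  # ~0.8
```

No single number attached to either genotype could have predicted both outcomes; the "fitness" only exists relative to an environment that can change out from under it.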

So if the Eugenics movement was wrong in trying to measure fitness and use the measure to evaluate people's worth in evolutionary terms, what strategy might work to maximize the potential fitness of the human race? Remember, we're talking about humans here, so this strategy can be consciously planned. One strategy is to retain a level of diversity within a population of organisms so that there's a repository of attributes that might be useful in the event of changes in the population's environment. In other words, don't try to optimize the population by making them all perfectly fitted for the current environment, but leave some room for fitness in changed circumstances. And, of course, don't try to measure fitness, because you really can't. This is clearly not a perfect strategy, but then nothing about evolution is perfect. Evolution is concerned with what works using what's available.
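The diversity-retention strategy can also be sketched as a toy model (again, the genotypes, counts, and single abrupt "environment flip" are invented for illustration). A population "perfectly fitted" to the current environment does better while that environment lasts, but the diverse population is the one that retains a viable core after the change:

```python
# Toy model of the diversity-retention strategy; all numbers invented.

SURVIVAL = {
    "warm": {"thick_fur": 0.1, "thin_fur": 0.9},
    "cold": {"thick_fur": 0.9, "thin_fur": 0.1},
}

def survivors(population, environment):
    """Expected survivors, given a dict mapping genotype -> count."""
    return {g: n * SURVIVAL[environment][g] for g, n in population.items()}

optimized = {"thin_fur": 1000, "thick_fur": 0}    # perfectly fitted to "warm"
diverse   = {"thin_fur": 800,  "thick_fur": 200}  # keeps some "useless" variation

# While the climate stays warm, the optimized population does better...
print(sum(survivors(optimized, "warm").values()))  # 900.0
print(sum(survivors(diverse, "warm").values()))    # 740.0

# ...but after a flip to cold, only the diverse one has a viable remnant.
print(sum(survivors(optimized, "cold").values()))  # 100.0
print(sum(survivors(diverse, "cold").values()))    # 260.0
```

The diverse population pays a small cost in the present for insurance against a future nobody can compute in advance, which is the whole argument in miniature.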

If you want to get deeper into the nature of fitness, and how it fits into the overall process of evolution, here is a brief, eclectic, and personal list of books and websites related to that subject. It's a list of writings I've found interesting, not necessarily all the influential writings, or even a good survey of the important positions on the subject. The authors of these books are not necessarily in agreement; in fact, there's some rather acrimonious debate among them.

The Extended Phenotype - Richard Dawkins's take on natural selection, fitness, and what it is that is actually selected.

The Spandrels of San Marco and the Panglossian Paradigm - Stephen Jay Gould and Richard C. Lewontin's famous critique of adaptationism. "Spandrel" has entered the technical vocabulary of evolutionary theory because of this paper.

Darwin's Dangerous Idea - Daniel Dennett's exposition of a sort of Darwinian fundamentalism. Fascinating, well-written, and not quite right (IMHO) on the question of whether evolution can be an algorithmic process.

Stephen Jay Gould's Reply to Dennett - It gets hot and heavy in here; neither Gould nor Dennett is shy about attacking the other's positions and defending his own, no holds barred.

Saturday, January 24, 2009

More Dog Pictures

Here is a slideshow of some more recent (and better) pictures of Jemma and Spencer (click on a photo to see it full size). We've had them six months now, and they're working out very nicely. Not to say they don't have problems, but we expected more than we've had. And they are really cute ...

Thursday, January 22, 2009

Let's Object to Torture on "24"

HumanRightsFirst.org is collecting signatures on a petition to the producers of "24", the TV show that has shown 89 instances of torture committed by US government agents. The petition points out that torture is known to be an ineffective and often counter-productive way of collecting useful intelligence, and that showing its use by Americans has given Americans the idea that it is both useful and acceptable, and given many in other countries the idea that Americans are willing to commit acts explicitly against their own morality and laws. Please join me in signing this petition.

Monday, January 19, 2009

Back to Life in the New Year

OK, it has been a while. The last few months have been full of twists and turns, and a lot of stuff to deal with, but we seem to be slowing down a little now, so I'm taking some time to bring this blog up to date. Here's a quick list of some of the things we've been dealing with; in the next (real soon now) post I'll put up some more pictures of the dogs.

  • On the dog front: Jemma had to have surgery for her ear infection; it was very successful, but in the process we discovered that she has (or at least temporarily had) some sort of auto-immune clotting problem; we've been treating that successfully, but part of the treatment is Prednisone, which makes her ravenous; we've had to be satisfied with holding her weight steady; getting her to lose some weight will have to wait.
  • I don't really need a new car now — oops, guess I do after all: I had been thinking about buying a new car; with the economy the way it is, I decided to put it off for at least 6 months. Then someone ran into the back of my car and totalled it. So now I have a "new" 2004 Volvo wagon, the first car I've bought in 12 years, and the first I've owned with all wheel drive. I got it just in time for the really nasty string of storms we had from just before Christmas to just after New Year's Day. Which would have been really terrific, since I could have cruised the icy streets in safety (that car handles really well on ice and snow; I'm impressed), except ...
  • Cue the storm: the first Sunday morning of the storm, about 8, we were awakened by a loud noise from the other side of the house. A large tree in our side yard had come down and was resting on the driveway and the deck, and had just missed the back of the car. It turned out there was no damage to the house, but it took us 4 days and a lot of money to get the tree cut up and hauled off without it taking out the deck. The tree blocked the driveway for that time, so I couldn't get the car out.
  • Smooth operations: A couple of months ago I started planning to have surgery at the end of December; it was a relatively new operation that relieves chronic back pain resulting from compressed nerve roots, which I've had for the last 8 years or so. All was going well until my medical insurance refused to cover the operation. A lawyer and two appeals later, they're still not going to cover it. One possibility is for me to pay for it myself, since I can probably get it done for about one third of what the hospital will charge, but ...
  • I'm a statistic: in the beginning of November I was informed that my job was going to be eliminated as of the end of the year. The job market being what it is, I haven't found another job yet. The good news is that I have medical insurance through COBRA (paperwork still going through); the bad news is that it costs (almost literally) an arm and a leg.
So what does this all mean? I've decided to treat being laid off the way Travis McGee treated the time between cases: I'm taking my retirement in pieces. So we'll use unemployment to pay for medical insurance, and pay living expenses out of retirement savings (which weren't as badly hit by the stock market tanking as a lot of people's; I was getting ready to retire soon anyway, and was in the process of moving out of the stock market and into bonds and other income assets). When I find a job, I'll unretire for a while, until the economy improves enough to make retirement practical. In the meantime I have a lot of books I've been planning to read, a novel I've decided to write, and if I get bored I can always download courses from MIT and learn economics or astrophysics. So I'll have fun.