Sunday, June 21, 2009

The mystery of N10TM

So, I'm out driving to get groceries here in the south-end of Grand Forks, ND, and down the avenue comes this heavy pickup truck with a kingpin thingy to haul this semi-sized flatbed trailer. What's on the trailer immediately gets my attention, I mean, you don't just see stuff like this every day. It's a wrecked airplane! And by the looks, it had been a really nice medium GA airplane.

I see the trailer turn into the parking lot of the Super One strip mall area. It appears the driver's going for a break or pizza at the Pizza Hut or something. I cross over the lanes and follow him into the parking lot and then come to a standoff distance alongside, gawking over the awesomeness of the man's cargo.

He jumps out of the heavy pickup and heads for the Hut. I want a picture but don't have a camera on me. It appears I'll have some time, though, so I complete my grocery shopping and return 30 minutes later with a camera.

I arrive just in time to meet the driver as he's emerging from the Hut. He proceeds to check the straps securing the load of wrecked airplane: mangled engine pods forward, mostly intact fuselage in the middle, and wings, bit of gear, and empennage at the back of the trailer.

I quickly go for my camera and approach, snapping a few shots on what's about the end of my memory card (still nearly full after a wedding shoot).

The driver notes my interest and I engage him with questions. He's hauling salvage. This airplane wrecked in 2007. Ran out of fuel and crashed into a truck in a parking lot almost exactly like the one we were in now. I didn't recognize the model immediately. The driver tells me it's a King Air, and I mentally note from the three rows of cabin windows that it must be a C90 King Air. I note the tail number. Since it wrecked in 2007, the NTSB probably has an accident report up on it by now.

The driver shows me pictures of the accident scene. I note to him how the cabin is squished a bit under the nose, but is otherwise fully intact and ask if there were serious injuries. "Yes," he replied, "pilot and three passengers got thrown around pretty bad."

"He ran 'er out of fuel," the driver explained.

"Wow, do you know how high he was when he ran out, I mean, was he on approach to an airport?", I asked.

"Oh God no."

I gave a grimace in acknowledgment. The driver offered all he knew, which was that he appeared to run out of fuel at a bad moment, and didn't seem to have the height necessary to execute any better a forced landing, like on an airport or away from people and cars and buildings. Considering this, some unoccupied wrecked cars and no fatalities seemed to be a decent outcome to a bad situation.

He explains that the stalling speed of this airplane is something like 90 mph, so coming into the parking lot would be like wrecking your car at full interstate speed.

He's preparing to depart, so I quickly walk around and shoot a couple more pictures, filling the memory card and hoping for the best, and then thank him and allow him to be on his way. This was a privilege. The airplane had crashed in Chattanooga, TN, sat around there, and was now sold for scrap. Though I didn't catch the final destination, I got a little insight into the life cycle of downed airplanes.

So, curious to know more about the circumstances, once home again I plug the tail number into the NTSB's database and get the accident report.

If you're reading this, skim the report and come back. Okay? Now, it felt to me that the NTSB didn't care all that much about this incident, conducting little more than a phone interview with the pilot. It doesn't appear anyone else was much involved. An FAA inspector confirmed the absence of fuel in the wings, but the pilot's story raises some questions in my mind:

He reported gauge readings between FULL and 3/4, and estimated from this that he had fuel for at least 3 hours, more than enough for the 1h20m flight. Now...I don't know the preflight procedures for this type of airplane, but I do know that aviators generally regard fuel quantity gauges in GA airplanes as liars. I guess certification standards are such that they must read accurately when the fuel tanks are full and when they are empty, but the middle indications that come during operation may mean only that the tanks are neither full nor empty. By how much? Are you timing your flight? Do you have any sort of totalizer measuring fuel burned?

Well, I was almost right in my guess as to aircraft model. It was a B90, the late-1960s forerunner of the C90 which I had guessed. The changes amounted mainly to a slightly buffed engine model and a longer wingspan, so I mostly nailed it.

Pulling some B90 performance specs off the internet, I see that in cruise the airplane ought to burn about 64 gallons/hour. Now if the gauge indications are anything like my car's, when the gauge reads 3/4, the actual level is more like 1/2. And having half-full tanks in this airplane at that cruise burn rate would yield something close to 3 hours cruising time. So the pilot's estimate of flying time available seems to check here.

To my total surprise, Flightaware still had the accident flight's history in its database! Humorously, it listed the destination as Chattanooga (it was really Georgetown, KY), and noted that the airplane had "arrived." Yeah, I'll say that's true. One way or another, they always arrive. This data features prominently later.

According to the report, the airplane had reached its cruise altitude of FL210 when the pilot noticed that two of the four gauges suddenly read practically empty. Reassessing his situation, he reported to the NTSB that he estimated having about 50 gallons of fuel aboard at this point, and opted to make a diversion to Chattanooga. It's quite strange that the left-side gauges went from nearly FULL to empty in just 22 minutes, but...maybe there's some sort of electrical fault with the gauges. Who knows?

Do I dare to call shenanigans on the pilot, and on the NTSB for not making this clear in its report? Not being a real-world pilot myself, just an enthusiast (for now?), I'm about to get pretty presumptuous. But, this is the internet after all.

Again, according to the data on the B90 from the internet, in cruise power the airplane will burn approximately 64 gallons per hour. So that means with 50 gallons estimated remaining he ought to be able to continue up there in cruise for another 45 minutes at about 200 knots.

According to the report, it's with this estimate in mind that our dear pilot elects to prudently divert to Chattanooga, about 45 nautical miles away. If he stays at altitude and cruise power, he'll get there inside of 14 minutes, leaving 30 minutes to descend and execute an approach (at cruise power, which he wouldn't use of course, so he may have even more absolute reserve).

But instead, he's out of gas and crash-landing on some guy's pickup in the middle of a strip mall parking lot! How could this be?

Well, now I turn to the Flightaware data for N10TM on the 19SEP07 incident flight. Flightaware gets its aircraft position data from the same network air traffic controllers use to monitor the skies. Radar sites feed controllers the raw data, and their terminals process it for their needs. After that, it goes into a network to which other entities may acquire special access for fleet monitoring, ground service planning, traveller updating, etc.

From this data, I noted that the airplane never reached its cruising altitude, and entered a turn toward Chattanooga near the apex of its flight, around 19600 feet. From this point the flight proceeded more or less direct to the field in a continuous descent. The descent rate and airspeed appeared to be not always stabilized, but averaged 1300 feet/minute at 180 knots groundspeed. That's enough height and speed to go 45 nm, and the straight line distance between reported radar points was in fact 45.6 nm. The field was about .6 miles further along.

It appears from internet sources that a best glide speed isn't published in the POH for the B90, but one source inferred one from data published for a C90 and listed it as about 125 knots. I don't know what sort of descent rate that would translate into, but 1300 fpm doesn't seem out of the ballpark to me.

Now in his report to investigators, he'd estimated having 50 gallons aboard about the time he noticed the two empty gauges and elected to divert. In descent, the power is normally pulled back somewhat, in some cases (certainly for jets, but maybe less so for turboprops like this) all the way to idle. This allows essentially a gliding descent and initial approach, making up for the excess fuel used on the climb to altitude by now using very little on the descent back down, certainly much less than that used on cruise. So the picture should still be okay.

Somewhere in this descent he reported he ran out, and too late to do anything but strap down tight and pancake on the nice pickup truck, like it was a last minute happenstance. I don't buy it. If his estimate had been right, he ought to have ample fuel to make a normal approach and landing. Maybe even enough to afford one go-around if he messed up flying his approach path. How could this be?

In my view, simple: he's shading the truth to investigators. He doesn't appear to have made it to cruising altitude at all (he was about 1400 feet under it). At the time he decided to divert, he was probably already out of gas or very close to it. He didn't methodically consider and then execute a diversion while still carrying at least some reserve of fuel. He beat a hasty retreat, in okay but somewhat less than perfect form, to the nearest suitable airport that came to his mind.

He turned direct from his climb heading to the airport at Chattanooga, and didn't even try to line up for an approach to the closest runway end. He appeared to be making a beeline for the field, and I think he hoped to kick the airplane 'round at the last second to line up and touch down on the runway there.

Only, he didn't even make it that far. His groundspeed and descent rate suggest a path that might not be unreasonable to expect from a gliding aircraft of this type. Surely if he were still powered, as he suggests when he elected to divert, I would expect he'd want to keep his altitude until he was certain of making his diversion airport. This would mean a delayed descent by some amount.

Even in normal circumstances, one wouldn't choose to make a continuous descent from the point he had, as obviously it's still too far out, and the data doesn't suggest any level flying segment. I think under normal circumstances one might plan to be in the airport vicinity at around 3000 feet, so as to have some flexibility to set up a normal approach.

In my armchair cockpit, I think I'd keep at cruise altitude to benefit from the fact that my fuel burn would be more efficient up there. I'd start down only if an approach and landing was assured, and for utmost margin, I might even fly until overhead the field at cruise altitude, and then enter a descending holding pattern above the field and inbound to a holding fix lined up with the landing runway. You can be certain of gliding in, in that situation.

None of this happened, and I believe that's because he'd already lost power. And while the outcome was a fair one for him and his passengers, he got lucky that no one was injured or killed on the ground. The track data had him near a golf course just before the parking lot. That might have been a safer forced landing site. I think he was fixated on just trying desperately to make it to that field.

The NTSB might have been wise to this pilot, though. They didn't seem to chase the obvious possibility that there might have been a gauge problem (given gauges that seemed to show close to FULL), or maybe some sort of fuel leak. A conservative assumption of half-full tanks at that stage would, as the pilot mentions, rightly have given him at least 3 hours of cruise flight. Yet only 22 minutes after takeoff, he's noticed two gauges reading about zero (it's not reported what the other gauges read). And fifteen minutes and 45.6 nm later, he's glided that bird to the deck. I'd say his gauges were all probably reading closer to empty all along.

Friday, June 19, 2009

Using ATA over Ethernet saves the day (and data)!

Hey... a non-political post for once!

Welp, this week I got an unrequested opportunity to put my PC technologist hat back on and fix my laptop, whose hard drive had gone nearly tango-uniform.

I flexed my dusty skills and, after much mafipulation, managed to extract the win! Dead 40GB drive out (hey, it's like a 2002 model!), new inexpensive 160GB drive in, all data salvaged, unit operating like nothing ever happened.

My wife and I share the laptop and use it considerably. She noted earlier in the week that it was making strange noises. I didn't hear them, but ran SpinRite across the disk to verify its integrity. That passed and no problems were noted. The hard drive is OEM, and my rule of thumb is that one ought to expect a HDD failure in a laptop at least once in a 5-year ownership period. They have to take lots of mech-destroying bumps and jolts, and so are more at risk than a well isolated desktop unit. I've never owned a laptop that made it from purchase to trash without a drive dying.

A few days later, the most horrible noises did begin to come from the unit. Not the clicks of head resets, not the ray-gun of stuck platters or a stuck head armature. This was a lower-pitch metallic growl which sounded almost exactly like a large sleeve-bearing case fan whose aforementioned sleeve bearing has become contaminated and is now loose and buzzing.

During these growls, the drive couldn't move any data, and Windows halted. The situation was serious. I powered down immediately, hoping to preserve a chance to salvage the data, and resigned to procuring a new drive.

We had backups; mine were up-to-date, Jane's were weeks old. Just the same, I was hoping not to have to play endless hours of setup, reloading Windows and Ubuntu and getting all the apps reconfigured and so on.

I had no access to a 2.5" IDE drive adapter cable for a desktop or USB enclosure. So instead, with a cold drive, the plan was to boot the laptop with the failing drive to Knoppix and connect it to my network. Then, I would also boot my wife's Windows desktop PC with Knoppix and use it as a repository for rescued data (all my big drives were too full!).

I cleverly hit upon ATA over Ethernet as an expedient means of accessing the faulty drive and moving its content to an image file on the desktop PC.

The process for setting up ATA over Ethernet was as follows:

Both PCs:

Verify AoE support in the kernel

grep ATA_OVER /boot/config-`uname -r`

A result denotes support. If you've ever configured/compiled your own kernel, this bit seems self-explanatory.

Assuming support's been modularized, insert the module

modprobe aoe
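To confirm the module actually loaded, a quick sanity check (plain module tooling, nothing AoE-specific):

```shell
lsmod | grep aoe    # lists the aoe module if it loaded successfully
```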

Target PC (the PC whose block device you want to make available on the ethernet):

Install the vblade EtherDrive emulator. This software exports any local block device, partition, RAID, LVM volume, or even flat image file, as a network block device.

aptitude update

aptitude install vblade

Then export the chosen block device to the LAN

vbladed 0 1 eth0 /dev/hda

In this case, the above line exports my failing hard drive as shelf 0, slot 1. Shelf and slot numbers are how you differentiate the various block devices that might be available on the same LAN. Most PCs have only one connected ethernet interface, but if you have more, you can specify each one you want to make your device available on. And if your initiator PC has multiple interfaces to the same network, you'll get automagic channel ganging for faster throughput, provided your device's native interface throughput is faster than a single network interface.
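If you do have multiple interfaces on the target, my understanding (check the vblade man page) is that each vbladed instance serves one interface, so you announce the same shelf/slot once per NIC. The interface names here are just examples:

```shell
# Announce the same device (shelf 0, slot 1) on two NICs by running
# one vbladed responder per interface.
vbladed 0 1 eth0 /dev/hda
vbladed 0 1 eth1 /dev/hda
```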

Anyway, on the initiator PC (the PC which is going to access the remotely shared block device):

aptitude update
aptitude install aoetools

That should install the needed support software to attach an AoE-shared block device, and the aoe-stat status command should show you the device which was discovered and made available to you; in my case, e0.1.
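For reference, the aoetools package includes a few small utilities for poking at this (command names from the aoetools distribution; the e0.1 device reflects my shelf/slot choice above):

```shell
aoe-discover      # prod the aoe driver to (re)scan the LAN for targets
aoe-stat          # list discovered AoE devices and their sizes
ls /dev/etherd/   # the device nodes appear under here, e.g. e0.1
```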

The system's already attached to the drive at this point. There are no security or per-machine restrictions, so if more than one machine may want to access the device, you'll have to coordinate access manually. While many initiators could probably read from a device simultaneously, mounting a filesystem shared this way read-write on more than one initiator is going to damage the target's data structures.

Everything that happens next is from the initiator PC. I used GNU ddrescue to image off the failing drive in a manner that would allow me to backup and retry unreadable areas if necessary.

ddrescue /dev/etherd/e0.1 fail_drive.img /home/knoppix/transfer.log

Specifying the transfer logfile allows ddrescue to record its progress, skip over errors, resume if interrupted, and using extra options, go back and retry previously failed areas.
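As a sketch of those extra options (the -r flag per the GNU ddrescue documentation of the era; device and file names from my setup above), a retry pass is just the first run with retries enabled:

```shell
# First pass: grab everything that reads cleanly, logging progress.
ddrescue /dev/etherd/e0.1 fail_drive.img transfer.log

# Later pass: same command plus -r 3 to retry each bad sector up to
# three times; the logfile confines the work to previously failed areas.
ddrescue -r 3 /dev/etherd/e0.1 fail_drive.img transfer.log
```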

Ddrescue is a super handy tool. If SpinRite's dynastat sector-reading abilities were available to this tool, it would be perfect. (Sidebar: I consider it a big failing that SpinRite has no convenient means of redirecting successful sector reads to another device or image file. Often, when dealing with a failing drive, your next good read is likely to be your last, and the mechanism may be continuing to degrade around you. In such situations, the less you have to touch the source drive, the better. You want to transport Scotty's pattern out of the damaged transporter buffer and into a fresh one you've got waiting; you don't want to just reconstitute the degraded pattern inside the same degrading transporter. What if, after many hours of beautiful dynastat-recovered and reallocated sectors, the mechanism's occasional clicks become a permanent repeating clank, rendering the drive inaccessible? Steve, think about this for SpinRite 7. Consider the wealth of block device connection options a live Knoppix environment gives us: that's the rich platform for attaching revival storage, and SpinRite's internals would be the muscle.)

Well...this tool ran for about 90 minutes, and the drive occasionally emitted those painful shrieking buzzsaw sounds, but eventually ALL the data was salvaged. Hooray!

The second act was to be far more thorny. The original drive was 40GB and held a majority Windows partition with system-volume Truecrypt encryption (to protect private personal data in the event the laptop was ever stolen), a minority Ubuntu partition, and a 2GB partition at the end as Linux swap.

To be sure, that was getting quite cramped, and a happy side effect of the drive replacement was the prospect of much more space to work with. But how to transplant these bootable partitions onto the new drive in such a way as to ensure they would remain usable (in the case of the Truecrypted Windows system volume), bootable, and enlargeable?

This turned out to be a more formidable task than I'd originally bargained for. I won't go into full details, but simply report that in the end, after much experimentation and head-scratching over partitions that then failed to boot, I was finally totally successful. Here instead is the shorter story of the process which worked:

I began by using the previous AoE process to write back the 40GB drive image to the new drive.

In my prior failures, I learned that the best my BIOS was going to do was 28-bit LBA, meaning that only 137GB of the 160GB drive capacity would be addressable by the BIOS (28 bits of sector address times 512-byte sectors gives about 137GB). This had bearing on my repartition plan. Prior experimentation moving the Windows partition around had left it unbootable for unknown reasons, so I decided to leave it in place but expand it later. The boot-critical pieces would live below the 137GB barrier, to give Windows the most space and to prevent boot problems from any partition straddling the barrier. Booting Ubuntu therefore required a small boot partition from which the BIOS could load the kernel; once running, the kernel can do 48-bit LBA, so mounting a root partition past 137GB is no problem. After expanding the Ubuntu partition some, the remaining space could be allocated to additional NTFS storage and Linux swap.

I started by noting down the sector layout of the partitions. Frak cylinders, sectors are just fine in LBA, so ignore cylinder boundary warnings when partitioning.

The structure was:
[30GB Win/Truecrypt][8GB Ubuntu][2GB swap][**unallocated**]

To move the partitions around, I needed something sector-precise and unafraid of JFS or encrypted garbage. This ruled out parted, which insists on understanding the underlying filesystem to do moves and resizes, though that isn't strictly necessary. So I went manual.

I would define a temporary 4th partition at the sector positions I desired for my destination, then use ddrescue to copy the data from the current location to the new one. After test-mounting the filesystem on the new temporary partition, I would note the sector locations, delete that partition entry, and reuse #4 for setting up the next new partition location.
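One round of that shuffle might look roughly like this. A hedged sketch: the device name, sector numbers, and the scripted sfdisk form are illustrative, not my actual values.

```shell
# 1. Define temporary partition #4 at the desired destination span
#    (start sector, size in sectors, type 83 = Linux; -uS = sector units).
echo "268435456,16777216,83" | sfdisk -uS -N4 /dev/hda

# 2. Copy the old partition's contents sector-for-sector to the new span.
ddrescue /dev/hda2 /dev/hda4 move.log

# 3. Test-mount read-only to verify the filesystem survived the trip.
mount -o ro /dev/hda4 /mnt && umount /mnt

# 4. Note the sector positions, delete entry #4, and redefine the real
#    partition entry at the new location.
```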

In this way I first set up a 250MB boot partition positioned to end precisely on the last addressable sector inside the 137GB boundary. Next, I used the fake #4 entry to reposition the Ubuntu root partition to its new home starting after the boot partition. I then mounted both and transplanted /boot to the new boot partition, made arrangements in root to splice this partition back onto its normal /boot location within the VFS once Ubuntu was booted (so automatic kernel updates would work), and then used the grub shell to install the grub bootloader into the boot sector of the boot partition, pointed properly at the needed files in the partition.
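The grub legacy shell session for that install step went something like this. A sketch: (hd0,1) assumes the boot partition is the second entry on the first disk, as in my final layout, since grub counts both disks and partitions from zero.

```
grub> root (hd0,1)     # partition containing the /boot files
grub> setup (hd0,1)    # embed the loader in that partition's boot sector
grub> quit
```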

Once satisfied, I could delete all the partitions, recreating them one-by-one in their new larger sizes.

Now the structure looked like this:

[136GB Win/Truecrypt (30GB filesystem)][.25GB ext2 BOOT]|137GB Boundary|[16GB Ubuntu root (8GB filesystem)][6GB extended -> [4GB NTFS][2GB Linux swap]]

At this point, I tested bootability and was pleased to find both OSes bootable by normal grub selection. The Truecrypt MBR password dialog to open up the Windows partition for use worked as normal.

Finally, it was time to resize the filesystems to encompass their entire partitions. With the Truecrypted Windows volume, I originally thought I'd have to decrypt, resize, and re-encrypt, but I cleverly worked around that inside Ubuntu.

I booted into Ubuntu and installed ntfsprogs (for the ntfsresize utility). ntfsresize works on the filesystem inside a partition. To give it access, I used software by Jan Krueger which does the Truecrypt authentication magic and passes the results on to dm-crypt, which can then handle the realtime blockwise encryption and decryption. The output is a mapped block device which presents decrypted data, accepts clear data for encryption, and works just like any normal, unencrypted block device.

Under normal circumstances, this is what I do to get access to my documents on the NTFS filesystem, as I store finance data there, which I use in Ubuntu on GNUCash.

I passed this mapped device as the argument to ntfsresize, which WAS able to resize the enciphered filesystem to span the whole partition! No laborious decryption necessary for the tool to work! Sweet!
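The invocation amounts to something like this (the mapper device name is hypothetical; yours depends on how the dm-crypt mapping was created):

```shell
ntfsresize --info /dev/mapper/tcwin   # dry run: report current and maximum sizes
ntfsresize /dev/mapper/tcwin          # grow the NTFS to fill the mapped device
```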

Resizing the Ubuntu partition was far simpler, thanks to the "resize" remount option for jfs. I booted the Knoppix live CD once more, mounted the Ubuntu root partition, then remounted it with that option specified. Leaving off any size specification causes it to expand the underlying jfs filesystem to fill the whole partition. Unmount, and done.
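In command form, that whole resize was roughly (the device name is illustrative):

```shell
mount -t jfs /dev/hda3 /mnt/ubuntu    # mount the Ubuntu root partition
mount -o remount,resize /mnt/ubuntu   # grow jfs to fill the whole partition
umount /mnt/ubuntu
```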


I now have a secure Windows partition with much more free space, a larger Ubuntu side, and some residual swap and unencrypted NTFS storage. Best of all, aside from the space benefits, it's like nothing bad ever happened to the lappy.

Finally, a tidbit if you ever need to very quickly create a large image file for the purpose of storing or preparing a filesystem.

The tradition is use of the trusty dd program with /dev/zero as the source, but this results in a laborious and needless writing of zeros across the entire desired span of your disk-image file. The tip instead uses the sparse-file capabilities of most modern filesystems.

dd if=/dev/zero of=your-sparse-diskimage.bin bs=1 count=1 seek=1G

Note how the above command uses a single byte as the block size, seeks 1G blocks (1GB, since a block = a byte here) out into the new file, and then writes just one single-byte block at that offset. This results in a file just over 1GB in apparent size which actually occupies only one cluster (generally 4KB for most filesystems) on disk!

By seeking to the desired end before writing, you cause the filesystem to leave a hole between the written byte and the start of the file. The file appears to be the size of the full seek, but its size on disk in this case is only a single cluster. Cool! Of course, once you put a filesystem into the image file, mount it, and start throwing data into it, its size on disk will grow to approach the reported size.
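You can see the effect yourself by comparing the apparent size against the space actually allocated (standard coreutils; note the file ends up one byte past 1GB, because dd seeks first and then writes its single byte):

```shell
# Create the sparse 1GB image: one 1-byte block written at offset 1GiB.
dd if=/dev/zero of=sparse.img bs=1 count=1 seek=1G

ls -l sparse.img   # apparent size: 1073741825 bytes (1GB + 1)
du -k sparse.img   # allocated size: just a few KB
```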

While I've posted this mainly for my own future reference, to any passers-by out there who happened to read it, I hope you found it of interest! Cheers!

Monday, June 15, 2009

Prudent Virtue

While reading "What Has Government Done to Our Money?" by Murray Rothbard, available online, I was enchanted by Rothbard's use of an uncredited quotation: "Liberty is the Mother, not the daughter, of Order." (see pg. 49)

I thought about this for several minutes and decided that it was self-evidently true. I wanted to know what man was so brilliant as to have uttered this phrase, selected by Rothbard, as I'm sure by many others before him, to capture with great brevity the proper relationship of order to liberty.

How masterful. Government would have us believe that to enjoy liberty, we must first order our world. Government then insists on doing this for us by its regulation and coercive power (often by appealing to our base sense of envy, and offering to use that power of coercion against some fashionable "them" to the popular benefit of poor "you"). But in so doing, the order which results confers more liberty only on the state, not its people, by restricting the liberties of the people.

Some internet searching yielded an answer as to the identity of the originator of that quote, and also an expansion of its idea to yield a sort of family tree of the virtues which generate individual liberty. Foremost among them is prudence.

Today I offer that article as a source of inspiration to you, dear reader.

It is my hope that this humble blog helps convey some of these Libertarian ideas which your own curiosity impels you to explore further.

On the political talk circuit, folks are always calling the likes of Glenn Beck and Rush Limbaugh and Mark Levin, pleading to know what they could possibly do that would make a difference in the present situation. It seems hopeless that mere individuals of limited means could have any power to frustrate the plans of enlarging state control. But I think it must be far simpler than we imagine.

If we all practice prudence in our lives, relationships, and economizing, then prudent culture will naturally spring forth. And that culture will have the awareness and expectation necessary to naturally diminish state control and restore natural individual liberty. We know how to behave, and we know what's good. Thousands and millions of small, individually prudent actions in all facets of life naturally have the effect of restraining government authority.

Demand for the services of a coercive government will be reduced, because we assert our ability to govern ourselves, and by prudence extricate ourselves little by little, from our imprudent reliance on government, instead of ourselves, as a provider of all our needs.

Prudence begets Thrift. Thrift begets Liberty. Liberty begets Order.

Maybe I can suggest one more offspring to this tree of virtues: Order begets Peace.

If you hadn't already, do check out this article.

Thursday, June 11, 2009

It's Fascism we're moving to, not Socialism

For a progressive, subordinating your will to the will of the state is the second-noblest goal to which you can aspire. Becoming part of the thin circle of thoughtful, expert, progressive men or women of action, rightfully ready to represent the public, is the noblest goal.

The public's consent to your good governance doesn't matter. Your credentials attest to that. If the public could govern themselves, there would have been no need for your ascendance. You may then assess matters and, representing the nation, dictate the best course of action: one that will produce, in your highly educated and comprehensive judgment, the greatest good for the greatest number. Social justice.

Many of you will endure some pain. In the eyes of those people of just action who represent you, this is unavoidable and necessary to provide for those who are judged more worthy. You wouldn't have been able to help yourselves anyway, for you haven't been ordained with the wisdom and intellect for the just application of power which those who represent you can claim. Subordinate your will to the state. The state can expertly judge your worth and, thus weighed, care for you as that worth deems appropriate, ensuring a fair and just distribution of limited resources to those who are most deserving, in the sole judgment of the progressive elite, and preventing inefficient waste on individual desires.

Critics of the policies and politics of the present administration, and opponents of progressivism more generally, often comment that we appear to be on the road to socialism or Marxism.

I beg to differ a bit. It's more unsettling to me than this. You might be simply looking around and seeing a diminished importance and role of private enterprise and free (unfettered) markets in our economy, and calling that socialism. Socialism as the opposite of a private, free market approach.

But we need to be more clear. Socialism prescribes a state-run economy in which the state owns the factors of production and dictates their use. All workers work for government-owned and -managed concerns, for the benefit of their fellows and themselves.

We probably won't ever have that in America. What we're getting now is a gradient into fascism. Fascism prescribes a state-run economy, as does socialism, but under fascism the factors of production remain in private hands. The government doesn't own the factors of production per se (Obama's said he doesn't want to be in the car business), but it does dictate to the owners how they will run their companies, invest their capital, and market their products. Bureaucracies and departments will take in economic data, operate on it according to their desired objectives, and generate a compulsory program to be taken up by the owners of firms to direct production and attempt to satisfy the wants of the economy as those wants come to be expressed by the progressive policymakers (not by individual buyers).

It is and will remain General Motors after all, not Government Motors. But it is now obliged to acquiesce to the will of the state and produce those sorts of cars which the state desires, in volumes the state determines appropriate, and pay the wages the state shall determine through its "pay czar".

This is done not to satisfy the desires of the consumer, but for the good of the state. Whatever the state's objectives might be (low-cost models for increased car ownership by the poor, or better environmental friendliness, perhaps even lower utility to encourage use of public transport alternatives, etc. and whatever).

Suppression of individual will and desires for the good of the state. "We have determined, it's for your own good." "It's for the good of the nation, by our decree." "You must buy a private health insurance package from one of these two government sanctioned private providers, because it's for your own good, and for the good of all Americans." That would not be socialism, that would be fascism.

In the end, I believe this is the destination of all progressive ideology, whether supporters of such ideas and policy understand it or not. One need only look to 20th century history. During the golden-era of progressivism, American progressives were supremely enamored of the action and efficiency which seemed to embody the fascist personalities and governments of the 20s, 30s, and 40s. To them, people like Mussolini were take-charge kinds of folks who imposed their will and got things done. Because such action was in their view in the best interests of the state. It wasn't a dirty word then, and many progressives openly expressed their support for fascists and lobbied for fascist-inspired policy here in America.

How is this so different from the Obama administration and its pantheon of extra-Constitutional czars? Or the Bush administration's Hank Paulson, crafting the TARP program and dictating its compulsory acceptance and program features to major banks?

The individual? Who cares about him? He must make his desires subservient to the needs of his nation. He must learn to give up a little for the good of his country, so that others may instead benefit. It's what JFK extolled. We must act to save the whole economy! It's for the good of the nation! (And by extension, you.)