So You Want to be Secure?
Security is about a lot more than keeping the bad guys out, and a lot less. We shouldn't confuse stasis with security.
11-Aug-2002 (Version 2: 2023-03-27 08:43:16)

Note: I am trying to find a style that blends a first-principles approach, which tends to be abstract, with a story-based approach, which has the danger of "proof by example". My wont is to decompose problems into their elements, but there is value in creating interconnections so as to relate to readers' experiences. The result is a bit meandering, but I hope it is effective in building from familiar ideas and generalizing.

Computer security is a big concern. Even without the current anxieties, if you ask people whether they want to be secure, the obvious answer is yes. Or, YES! We didn't suddenly discover security after September 11th (2001, in case anyone doesn't recognize the new idiom). Most of us (and those with something worth protecting) have some form of insurance. It allows us to share risk and have some confidence that whatever befalls us we will be able to move on.

We buy insurance because we recognize that, no matter how careful we are, we can't protect against all eventualities. For small everyday risks we can be self-insured, but for larger risks, such as losing a house to a fire, we limit our exposure by sharing the risks with others in the form of insurance.

Insurance also enables us to be adventurous. Not foolish but adventurous. After all, insurance works if the probability of harm is low but the cost is high. If both are high then the cost of insurance would have to be even higher. But I don't want to bore you with the mathematics of actuarial calculations and philosophy of insurance.

Insurance is just a way we come to terms with the understanding that we can't prepare for all eventualities. Instead of building our homes with just fireproof materials and using asbestos for our fabrics, we allow ourselves the freedom to live a little since the real risks for each of us are small. While we can do a lot to reduce our risks, the idea that we can rely only on preventative measures is foolish at best. And at worst we'll rely on asbestos to protect us. Actually it could be worse--imagine if asbestos had been deemed the one right way for fireproofing and it was mandated. In fact, to some degree that did happen and years later we discovered how deadly asbestos could be. But that is a topic in its own right.

More troubling is the word "security" itself, since we use the same word for a military model of security, for the financial model, and for many other notions that overlap. That's the power of English -- we can carry on conversations without having to bind the terms to any specifics.

Just as we choose the kind of insurance that is appropriate for our own needs, we need to be able to exercise our own choice (or description) when it comes to computer security. Even the term "computer security" seems strange. After all, I don't care very much about my computer being secure if I'm not.

David's essay on scapegoating described the consequences of assuming that there is something called security and that it comes in one size. After all, if it is well-defined and the experts in the government know the answer, then there is no need to bother you with the details. In fact, giving you powerful tools like cryptography will only cause harm, since you might use them to exercise free speech without making it available for inspection by those who are charged with protecting you.

At this point it's obvious that I'm heading into cyber/crypto-anarchy and card-carrying ACLU territory. Maybe.

Maybe not. The real issue is the economy. The economy, and these days it's a world economy, is the context in which we find our security. After all, insurance doesn't replace what we lost, but it can give us some financial security. Rather than thinking of the money, think about the opportunity and freedom it affords.

Free speech is about economics and the ability to innovate while the particular innovation is still viewed as disruptive and threatening. Virginia Postrel explores this in more detail in The Future and Its Enemies.

And to this complex mix we add computers. In the 1960s we used the phrase "Giant Brains" to refer to those fearsome behemoths. By today's standards those were mere toys. But there is still a mystique that leads us to project our fears onto them.

Y2K was one such debacle. Sure, some programs needed to be updated, and fear is a powerful motivator, but society was not so brittle that one or a million failing programs would destroy civilization.

The reason is simple: only resilient design works. We must assume that others' systems fail and thus guard against it. And, as I've pointed out many times, the reason the Web works is that the Internet is unreliable and, I'll add, insecure. The value of the Web forced even the most (well, not the most, they are still cowering in a disconnected hovel) fearful to connect to the world. The benefits of doing so are far too great to ignore, and the early browsers provided only cautious access and a degree of safety.

It's only a first step. We've extended the browser to be more capable, but at the price of some degree of exposure. Even as we read about gazillions of dollars in damage caused by these strange viruses lurking inside our computers, we continue to use these systems and we learn to be a bit more cautious. Newbies learn that not all they read is true, though even the most sophisticated get caught.

Computers do change the rules of the game. What was formerly secure inside a locked basement is now exposed to the world. It's exposed because we discovered that the information had no value hidden away and that we get a great deal of benefit from making it available. Just as we soon learn the folly of leaving our valuables unguarded in the middle of a crowded room, we learn to take reasonable measures in managing our information and our exposure.

We can't guard against every eventuality so we learn how to take reasonable risks and how to protect ourselves after things go wrong as they inevitably will. Insurance is a good model as long as we can reduce our exposure to money. A better concept is "resilience" -- like the tree that bends in the wind instead of snapping.

A resilient system keeps on ticking (apologies to Timex). More important is that once we have confidence that the system will survive we are free to experiment and create new economic value. And that's the real importance.

Simply trying to close our systems off to protect them from the enemies is defeat. First, it presumes we know who the enemies are and that we can distinguish between disruptive innovation, accidentally destructive innovation, and malicious intent. Even if we could, if the price is simply to preserve what we have, we have lost, because the economy is dynamic and preservation is nothing but stagnation.

Even without the emphasis on security we focus on making computers easier to use. What is missing from that sentence is "for old applications". What we need is the ability to find new ways to use computers or, better, computing. The hardware itself is only an artifact and the concepts are far more important. The Internet has given us a lesson in how to benefit from innovation at the edges. The particular implementation is a minor issue. As I pointed out in Edge Protocol V6, we don't need to rebuild the Internet to innovate, we can just build around it.

As an aside, attempts to make computers smarter than us only compound the problems, since they embed conventional wisdom more deeply and enfeeble us. I plan to write more about this topic in its own right.

Just putting fences around the periphery doesn't even get us security. Users (AKA people) will still have to cope with threats like Gator. By confusing .COM with the Internet we have created new exposures. A simple typo brings you to sites that are simply marketing traps. Then companies like Gator try to get you to install them, virus-like, on your machine. Gator itself is relatively benign since they just want to harvest you for marketing purposes. But how would you know? And how do you know not to click the "start button", press "run" and then type "deltree /y \" (kids, don't do this at home, nor at the office!)? To be fair to Gator, they do offer to find things people want, even if I don't like their intrusion, and they are sort of above-board about it. But I don't like accidentally running across their installer, since it is easy to click yes thinking it is yet another one of the certificates you do want to accept.

We need a literate society with people who understand what computing is and the tools to create what we don't even know will be important. And we need to tease apart assumptions we don't even recognize. For example, we confuse security with authentication.
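To make that last confusion concrete, here is a minimal sketch in Python. Everything in it (the names, the toy credential store, the permission table) is made up for illustration, not drawn from any real system. The point is only that authentication answers "who is this?" while security also depends on separate questions, such as what that identity is allowed to do.

```python
import hashlib
import hmac

# Toy credential store. A real system would use a salted, slow password
# hash (e.g. hashlib.pbkdf2_hmac), not a bare SHA-256 of the password.
USERS = {"alice": hashlib.sha256(b"correct horse").hexdigest()}

def authenticate(user: str, password: str) -> bool:
    """Authentication only: does this password prove this identity?"""
    expected = USERS.get(user)
    if expected is None:
        return False
    supplied = hashlib.sha256(password.encode()).hexdigest()
    # Constant-time comparison to avoid leaking where the match fails.
    return hmac.compare_digest(expected, supplied)

# Authorization is a separate question: what may this identity do?
PERMISSIONS = {"alice": {"read"}}

def authorized(user: str, action: str) -> bool:
    return action in PERMISSIONS.get(user, set())

# Alice can prove who she is, yet the system must still decide,
# independently, that she may read but not delete.
assert authenticate("alice", "correct horse")
assert authorized("alice", "read")
assert not authorized("alice", "delete")
```

Passing the first check tells you nothing about the other two, which is exactly why "we authenticated the user" is not the same claim as "the system is secure".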

As David pointed out, thirty years ago we weren't supposed to worry our pretty little heads with this security stuff. After all, the government would do it for us. The idea that we might want to protect ourselves against our neighbors was considered too minor for these august agents of our freedom (which comes at the price of freedom). It's as if we weren't allowed to lock our doors in case the military needed access.

We've lost thirty years of learning how to cope with threats. We don't want to spend the next thirty cowering when we could be contributing.

For a more detailed perspective on the limitations of computer security I recommend reading Homeland Insecurity in The Atlantic Monthly.