05-Jan-2003

I started to write about my successes in using my home network to connect all my devices but I soon realized that it was a tale of triumph over unnecessary complexity.
I like to keep things simple. That's why I just plug new devices into my home
network whenever I can instead of running special wires such as printer cables
or USB cables. Any computer can get to any printer without being dependent upon
a particular computer and I am free to place the printers where I need them
rather than near a particular computer.
That's the ideal and, to some extent, I've actually gotten it to work. I
really do want to keep things simple but when my wife read a draft of this essay
she put a "HA!" after my first sentence because things
are not simple despite my attempts.
Unfortunately the concept of simplicity is very difficult to grasp. It is
"obvious" that as we add more possibilities systems must become very complex
and the "solution" is to add some intelligence into these systems. In reality it
is this intelligence that is the real source of complexity by creating complex
linkages among the elements. Such systems are only simple as long as you use
them the way they were intended. This is the point behind David Isenberg's "Stupid Network".
If you try to create new capabilities by mixing and matching elements on your
own then the implicit assumptions become impediments that one must work around.
These implicit assumptions create couplings -- if you try to change one part of
the system, you wreak havoc elsewhere. It's as if turning the volume knob on the
television changed the order of the channels. We find ourselves spending more
time and effort solving problems created by those trying to do us good than doing our own work:
- Firewalls started out as simple fixes for exposed PCs and now we find that
the Internet has been replaced by walled villages connected and then subverted
by Virtual Private Networks.
- We use the IP address as both a routing path and an address.
- The DNS is supposed to be a source of stable names but it must also act as
a directory service which means that the stability has given way to the whims
of the marketplace.
I started this essay as a paean to the simplicity of just plugging my devices
into my network but in writing it I came to better understand how difficult it
is to get people to stop doing me favors that I must then work around.
At the beginning it was a big challenge to make my printers available on the
network but now, with XP, IPP (Internet Printing Protocol) and other
improvements, it is relatively simple and straightforward. To add my Epson 1270 I
just use the local URL http://epson1270/ipp.
For devices, like the Epson 1270, without a native Ethernet interface I use a
"print server" device but thanks to improved protocols I no longer need to
install special drivers for the print server.
I'm encouraged in seeing more IP-based devices becoming available. The Veo
Observer camera, for example, is only $200 and connects directly to the
network. I use the LinkSys WET11 to allow me to place the camera anywhere and
then bridge it via my 802.11 network.
Netgear, Hawking Technologies and others also have such products. I won't
review the particulars of each -- their features and their limitations.
At least I can fantasize that it is that easy. Unfortunately it has taken a
lot of work to achieve this degree of simplicity:
- The devices tend to make me work hard to assign IP addresses and names.
Often I have to run a special program or change my IP address so I can connect
to the device's default address. I shouldn't have to think about the IP
addresses. Instead they should all use a dynamic DNS or other convention to
post a stable name like PCs do. Out of the box the only stable name is the MAC
address but that can also be used to form a stable local name such as
"h00301b0772e3". Of course I should be able to add a simple local name such as
"epson1270". Right now I just edit the hosts file on my PC like in the
days before the DNS. This alone would remove much of the setup complexity.
As an aside, when I do something like using the MAC address as the basis for
the high level name there is the danger that people will assume that there is a
necessary relationship between the two uses and may build that dependency into
software. This is a normal result of people finding patterns. We shouldn't
expect people to look for caveats that aren't obvious. The solution is to make
sure that there are enough exceptions to the pattern to demonstrate that it is
not valid. This is part of the larger problem of confusing work-arounds with real solutions.
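The MAC-to-name convention described above can be sketched in a few lines. This is only an illustration of the naming rule implied by the "h00301b0772e3" example; the function name and the handling of separators are my own assumptions:

```python
def mac_to_local_name(mac: str) -> str:
    """Form a stable local host name from a MAC address by stripping
    separators, lowercasing, and prefixing 'h' (as in 'h00301b0772e3')."""
    return "h" + mac.lower().replace(":", "").replace("-", "")

# The Ethernet MAC 00:30:1b:07:72:e3 yields the name used in the text:
print(mac_to_local_name("00:30:1b:07:72:e3"))  # h00301b0772e3
```

A friendly name like "epson1270" would then simply be an alias the owner adds on top of this automatically derived, stable one.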
- 802.11 should and can be simple but the security and naming model is too
complicated to understand, and maintaining all the keys in the right place in
the right form is asking too much. For my 802.11b stations I've chosen to just
use an explicit list of valid MAC addresses so that the individual devices
don't need to be set up; each device has its own settings and it's difficult
to manage them all. This is not quite as secure but most of my devices are
still wired. The real solution, of course, is end-to-end encryption, not
idiosyncratic link-by-link approaches.
- IPP is nice but I still have to tell the system what printer I am using
and which driver. I'm surprised that the printer protocols don't automatically
provide this information. The protocol provides for this but the pieces
haven't quite come together. I also need to guess whether the name is "ipp" or
"lp1" or something else.
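At least the guessing can be automated. A minimal sketch that generates the candidate printer URLs to try in turn; the list of queue paths is illustrative only, since the actual path varies by vendor:

```python
def candidate_ipp_urls(host: str, paths=("ipp", "lp1", "printers/lp1")):
    """Return printer URLs to probe, since the queue path ('ipp', 'lp1',
    etc.) is vendor-specific and rarely advertised."""
    return [f"http://{host}/{p}" for p in paths]

for url in candidate_ipp_urls("epson1270"):
    print(url)  # try each until the printer responds
```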
The IP-based devices are still in the minority. My external disk drives are
still FireWire (IEEE-1394) and USB (2.0). It would be far simpler to just place
them on my Ethernet but the drives that use an Ethernet interface become
"Network Attached Storage" and carry a much higher price. Unlike the devices on
FireWire and USB, the NAS provides a complete file system and not just disk
blocks. Both models are valid -- remote file systems and remote devices -- but
it's unfortunate that the choice is tied to the transport.
Given that Gigabit Ethernet is now becoming inexpensive there is no reason
for this. Putting devices on a network rather than a local bus does avoid some
of the access control issues. Even that assumption is problematic, though:
FireWire can be used for networking, but the bus model doesn't address the
issue of ownership, so the first PC that happens to see the drive gets
possession.
Even without Gigabit Ethernet it makes a lot of sense to put devices on a
very inexpensive networking medium such as HomePNA or to use wireless
connections. As long as we can route IP packets the devices are part of the same network.
This is frustrated by the assumption that each transport is special and thus
we put block devices on busses like FireWire and USB and attached storage
devices on Ethernet. There's actually another class of remote disk drives on
FireWire which are used as part of the consumer electronics industry's attempt
to share "content" but I don't expect these to have a large impact since that
industry is hobbling itself in trying to limit the user's ability to actually
use the devices.
One potential disadvantage of a network-based device is that you have to
set up a relationship between the device and the computer (or other devices) but
this shouldn't be very hard. In fact, it can simply appear on all of the
computers in the house just like a locally attached device. The actual policies
will depend on the device. A printer might be available to everyone in the home
while a disk drive that only offers access to raw bits may need to be owned by a
single computer which implements the file system.
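Those per-device policies amount to a small table. A sketch of the idea, with hypothetical device names and policy values (none of these appear in any real product):

```python
# Hypothetical access policies for home-network devices: a printer is
# shared with everyone, while a raw block device must be owned by the
# single computer that implements its file system.
policies = {
    "epson1270": {"kind": "printer", "access": "everyone"},
    "disk1": {"kind": "block-device", "access": "owner", "owner": "den-pc"},
}

def may_use(device: str, host: str) -> bool:
    """Return True if the given host is allowed to use the device."""
    p = policies[device]
    return p["access"] == "everyone" or p.get("owner") == host

print(may_use("epson1270", "laptop"))  # True -- printers are shared
print(may_use("disk1", "laptop"))      # False -- raw disk has one owner
```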
I said "home network" and that's part of the problem. The notion that the
home network is not part of the Internet is the source of serious future
problems in making effective use of the Internet.
- It's nice that AT&T Broadband offers me up to five IP addresses but that
doesn't even begin to provide for my needs with multiple computers, printers,
cameras, game boxes and, in the future, just about any device. The result is
that none of my devices have a public presence. The problem is compounded when
the lack of a public presence is touted as a safety feature -- the firewall.
It's like saying that starvation is good because it prevents obesity.
- The "router" (really a Network Address Translator) provides interior IP
addresses that are not visible on the public network. Any connection between
my devices and the outside world requires special handling. Some of the cases
are simple and common. Browsing is very simple but if I want to offer a
service I need to map port 80 to a port on an interior machine. I have the
option of opening up one system completely (the DMZ or demilitarized zone in
NAT parlance) but that means that an IP phone can't easily listen on another
port. The whole notion of a "port" in the Internet Protocol is a bad idea
because it creates an implicit relationship among a set of IP addresses. The
result is perverse complexity and limitations. It also begets additional
mechanisms such as UPnP (Universal Plug and Play) to punch holes through the
NAT.
- Assuming that the home network is separate from the outside world makes it
easy to ignore the problems of how to specify who has what access to what.
Even within a house that is a serious problem. Any guest can connect to the
network and get full access by virtue of being physically present -- or,
worse, by simply being nearby in the case of 802.11 networks.
- "Use simple file sharing (Recommended)". This is part of the naïveté
inherent in seeing the home network as something apart from the rest of the
Internet but I'm singling it out because it is both dangerous and unnecessary.
It is a model of access control that subverts the mechanisms that are
available in XP and requires you to move files to special places in order to
give others access. You don't really say who has access, just whether others
can or cannot. The "others" are supposedly the members of your family but
that's true only if you are effectively disconnected from the rest of the
Internet. The original PC file sharing was all or nothing and port blocking
was implemented in order to remedy this naive design. Simple file sharing
brings this exposure back and then recommends it.
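The NAT's port-mapping work-around described above is, in effect, a hand-maintained lookup table. A sketch of that mapping; the addresses and ports are invented for illustration:

```python
# Illustrative NAT port-forward table: one public address is shared, so
# each service offered to the outside must be pinned to an interior host
# by hand.
port_map = {
    80: ("192.168.1.10", 80),      # web server on an interior machine
    5060: ("192.168.1.20", 5060),  # IP phone listening for calls
}

def forward(public_port: int):
    """Return the interior (host, port) for an inbound connection, or
    None -- an unmapped service simply has no public presence."""
    return port_map.get(public_port)

print(forward(80))    # ('192.168.1.10', 80)
print(forward(8080))  # None -- a new service needs another manual entry
```

The perversity is visible in the sketch itself: every new service requires editing this table, and two interior machines can never both listen on the same public port.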
This is another one of the infections that festers within the confines of the
firewall; another example of confusing an accidental property--isolation--with
the more complex concept of security. It is difficult to dislodge when
mechanisms like "simple file sharing" implicitly depend upon this
misunderstanding. While I am a great fan of learning by doing there is a danger
in making implicit assumptions because one simply doesn't even know there are
other possibilities. This makes it difficult to move ahead by making it
difficult to fix past errors that are embedded within the fundamental design of
our systems.
The real value of simply connecting everything via common IP medium is that
we are no longer bound by topology. I should be able to just connect to my Veo
Observer while traveling to see who is at my front door. In fact, I can make
that work just like this but it takes extra effort to work around each
impediment such as the NAT I need to use to multiplex my cable modem connection
and the special access control mechanisms in each device. Each victory then
adds to the fragile collection of elements which don't quite mesh and which
fail in surprising ways.
A first step is to recognize and reduce the implicit couplings. The Web is
successful because the systems are loosely coupled. If you use your own version
of HTML on your site others may not be able to view it but you don't make their
systems fail. Communities grow around common protocols.
In trying to meet the goal of connecting everything we need to be very
careful about building too much knowledge into the basic mechanisms. We are just
starting to experiment with the possibilities and these favors become limitations
and perverse surprises to work around:
- The DNS (Domain Naming System). Right now we couple the mechanism for
providing stable handles with a directory service. The result is that neither
works outside the simplest cases. The names themselves are leased and thus not
stable and we don't have explicit directory mechanisms.
- While trying to make the DNS serve as a directory we have failed to make
it an effective replacement for the volatile IP address. The DNS mechanisms
should make it easy for my printer to publish a stable name for itself so I
don't have to even think about its IP address any more than I think about its
MAC address when using the Ethernet. Of course I should be able to have my own
set of names for the devices which I can choose to share. This is a topic
which I will have to write about in the future.
- Firewalls were temporary work-arounds to protect networks that grew up in
isolation and had no notion of protecting themselves. The firewall violates the
end-to-end principle by acting as a very arbitrary meddler within the network
and thus makes it difficult to create new services. Sadly, as we see with
simple file sharing in XP, we have come full circle and the firewall has
locked us into the very problem it was meant to fix. We have fragmented the
Internet into a series of walled communities.
- Network Address Translation (the NAT) was a clever work-around that
allowed us to share a single connection to the Internet. I chose to use it for
home networking while waiting for IPv6 to ship. The fundamental flaw in the
NAT -- the inability of the internal systems to have a full public presence --
is now considered a feature. It is a mindless version of the firewall.
- VPNs. Another cousin of the firewall. Instead of recognizing the flaw in
the walled community model it compounds the problems by creating wormholes
between the communities. The result is implicit dependencies upon the topology
and reduced integrity of the walls themselves.
- IPv6 is stalled because we've confused the needs of routing within the
Internet with the need to give end points a public presence. IPv4 happened to
serve both purposes in the early days of the Internet. Since then it has become
more of a routing protocol than an address. The DNS was introduced to provide
the stable address (imperfect as it is) but it is still limited by the pool of
IPv4 addresses. The mechanism for efficient routing and the names of end
points should not be tightly coupled. This is actually reflected in IPv6
by allowing the first part of the address to act as a routing prefix and the
latter part to act as a local identifier. Unfortunately the end point needs
still seem to be given short shrift by coupling the design with the backbone
requirements. IPv6 has been delayed by trying to accommodate the
reintroduction of stale ideas such as QoS and circuits (flows) from the old
telephony networks.
- Naive models of trust have not only festered but have become endemic.
Simple (minded) file sharing has an us/them model with the assumption that
"them" is still "us". Microsoft's trust model and Verisign's certification give me
a Hobson's choice of 100% trust or 0% trust. I can't have qualified trust
which might allow a program to help me edit a picture but not run rampant
through my file system. This naive model of trust is endemic throughout
Microsoft's systems. Microsoft is far from alone in this naïveté. Identifying
someone as a spammer is little different -- I might want the catalog that you
consider junk. This is a complex topic in its own right and I will write about
this in more length. There are many related issues including confusing
identity with certification.
- The problem of 802.11 security is another aspect of the firewall
assumption. Instead of providing a simple end-to-end mechanism like encrypted
IPv6, we have a plethora of local security policies that are not only
confusing but ineffective. What good is 802.11 security if I have to trust
the clerk managing the access point? Conversely if we considered the home
simply part of the Internet then I could safely allow you to use my home
network without increasing my exposure.
- Discovery is the process of taking a description and finding the resource
or binding. It is important but it must not be fundamental because it is a
complex concept. This is closely related to the DNS problem. We have a mixture
of mechanisms that are built in at a low level and thus limit flexibility and
mechanisms which are vital but lacking. Persistent relationships should be at
an application level but since the DNS is seen as the ".com" mechanism people
try to achieve mobility at the IP level. This doesn't make sense since the IP
address is now used for routing so we need to invent new mechanisms to make
the routing address mobile. This is simply perverse.
- Buses such as FireWire and USB do not have the resilience and scaling
capabilities of networks. FireWire builds in notions like isochronous that are
heavily biased towards naive legacy assumptions. USB requires that we identify
the device before we can work with it. If my system doesn't recognize my
Pocket PC I can't synchronize, yet the same device will synchronize over my
802.11 connection because it doesn't depend on favors from a meddling bus.
These design problems are not fundamental and can be addressed. The first
step is to be explicit about the assumptions and the interdependencies that make
it so difficult to move forward.
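The stable, owner-controlled names argued for above amount to little more than a small name table that the devices themselves keep current, decoupled from both the global DNS and the volatile IP address. A sketch of the idea; the function names and addresses are invented for illustration:

```python
# A private name table decoupled from the global DNS: each device
# re-publishes its stable name whenever its (volatile) address changes,
# so the owner never has to think about IP addresses.
local_names = {}

def publish(name: str, address: str):
    """A device announces (or re-announces) its stable local name."""
    local_names[name] = address

def resolve(name: str) -> str:
    """Look up the current address behind a stable name."""
    return local_names[name]

publish("epson1270", "192.168.1.42")
print(resolve("epson1270"))          # 192.168.1.42
publish("epson1270", "192.168.1.77")  # DHCP handed out a new address
print(resolve("epson1270"))          # 192.168.1.77 -- same stable name
```

The name stays fixed while the address drifts underneath it, which is exactly the separation of handle from routing path that the DNS and IPv6 items above call for.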
But as long as we accept the notion that the Internet is little more than a
new television network or an updated phone network or a shopping mall
(e-commerce) we'll continue to accept the limitations and frustrations and the
economic consequences as if they are fundamental.
Unfortunately the current generation of users and system designers has grown
up hobbled by implicit assumptions that frustrate simplicity. The real essence of
the Internet is simplicity. We must be aware of the implicit assumptions that are
subverting this simplicity so we can start removing the perverse interactions.