COVER
RE: Broadband Connectivity Competition Policy Workshop - Comment, Project No. V070000 (Feb 13-14, 2007)
Name: Bob Frankston
Email: FTC-BBComments@Bobf.Frankston.com
Postal: 278 Lake Avenue, Newton, MA 02461
Site: http://www.frankston.com
Note: This document is available in both HTML and PDF formats.
INTRODUCTION
The title of the FTC workshop on “Broadband Connectivity Competition” assumes that the status quo makes sense and we only need to fine-tune it.
What struck me most about the workshop is the lack of a crisp insight. There was a lot of talk about how complex the issues are and lots of fascination with the details of the current Internet. But there was a stunning failure to see through the complexity.
We can argue all we want about neutrality or we can recognize that bits are inherently neutral and reframe policy in terms of basic connectivity. Basic connectivity means we can create our own solutions rather than being required to buy services from a provider. We would not need to petition either the FCC or the FTC for neutrality.
Attempting to bring a service-based model in line with the principles of neutrality is futile and counter-productive.
PREFACE
My goal in this essay is to remove the mystery that protects the telecommunications industry from real scrutiny. The industry seems formidable and too well defended by the FCC and its regulations – or, as I am wont to say, the Regulatorium.
I approach the issues with forty years of experience in developing technology and products. My academic background, along with writing VisiCalc and making home networks “happen”, has been formative.
When I look at the telecommunications industry I feel like the boy looking at the emperor’s new clothes. I am not awed by its size—I just see an industry that is structurally unsound. It exists only because of the FCC. The Internet has shown us that there is an alternative that gives us more opportunity and value.
My challenge is to help others see past the mystery and have confidence that we can effect change. The first round of triple-play was thwarted by the very simplicity of home networking. Today’s interest in broadband was a result but the message got lost in translation. Broadband is not a simple “pipe” – it is a complex service delivery vehicle whose purpose is to return control to the incumbent providers.
The Internet exposes a fundamental structural flaw in the definition of telecommunications – the assumption that we must buy communications and networking as a service. The Internet demonstrates that this is not at all true – we can create our own solutions using the basic fixed cost infrastructure.
It’s that simple. And the solution is already at hand.
But as long as we treat this as a slow cautious evolution of telecommunications we are denying ourselves what we already have.
My technical skills permitted me to take advantage of computing when few others had the opportunity. Understanding information systems is less about numbers than working with abstractions and systems. This is why I’ve studied how people learn and think and how they develop concepts. That study has given us a language for understanding complex, or seemingly complex, systems. The Internet and telecommunication systems are just that, systems.
While I don’t expect a web audience to read more than one or two pages, I hope that those who want to understand the issues will be willing to read a few more. There is some redundancy as I go through the issues quickly, then from a slightly different angle, and then in more detail. I’ve learned that it helps to look at issues from multiple perspectives, especially for a disparate audience.
My challenge is illustrated by the problem of explaining why the particulars of something as arcane as the IP address have a direct bearing on the governance issues for the Internet. The reason is simple – the IP address was meant to identify a computer system but, in practice, it is tied to where the machine is attached rather than to the system itself. This means that we don’t have a stable identifier and thus we created the Domain Name System. But because we used identifiers with semantics they became too valuable and had to be reused, and thus they are not stable either. As the net grew the routing issues became too complex, so the IP address came to describe a path, not just a place. ICANN was created to assign these names and numbers as the problem became more complex. If the IP address were stable and routing were simpler we wouldn’t need the same kind of governance.
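To make this concrete, here is a minimal Python sketch (the hostname is just an illustrative placeholder) of what an application actually gets from the DNS today – an address tied to the current attachment point rather than a stable, ownable identity:

    import socket

    # Resolve a name to whatever address happens to be assigned today.
    # The hostname below is an illustrative placeholder.
    host = "example.com"
    addresses = {info[4][0] for info in socket.getaddrinfo(host, 80)}
    print(addresses)

    # Nothing here identifies the system itself. Move the machine to a
    # different provider and it gets a different IP address; let the
    # domain lease lapse and the same name resolves to someone else.
    # Neither layer gives applications a stable identifier to build on.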
See, it’s simple. Or at least it’s simple if you’ve spent forty years studying the topic. Perhaps I can’t explain all the concepts in a few pages but at very least I want to broaden the readers’ perspective.
For those who want shorter documents you can look at http://www.frankston.com/?name=OurCFR and http://www.frankston.com/?name=Perspective and other documents at my web site. You can read more about my background at http://www.frankston.com.
This is being posted on the Internet – the version submitted to the FTC is a snapshot of my thoughts and explanations. I do not need to wait for a publication cycle to post new comments at http://www.frankston.com/public.
TELECOM VS THE INTERNET
The very existence of a Federal Communications Commission is recognition that the telecommunications industry would not survive in the marketplace on its own. The Internet has demonstrated that there is a viable alternative with the incentives aligned – an effective marketplace.
The Internet is not just another television channel that runs on Broadband pipes. It forces us to question the concept of telecommunications as a set of services.
The problem is that we are forced to choose among preselected services with the transport owned by privileged service providers who are most threatened by the abundance of the Internet.
There would be no problem to solve if we allowed the marketplace to take advantage of existing opportunities. The marketplace is really us creating our own solutions. The Internet is simply a demonstration of how well we are able to composite our individual efforts with the help of today’s digital technologies.
OVERVIEW
The concern over broadband competition and network neutrality stems from a sense that something is wrong or may go wrong.
When we look around the world we see other countries offering 100 megabit and even gigabit broadband connections. It’s easy to compare speeds but much more difficult to understand how the connectivity is being used. We’re confusing the Internet with television. The focus on speed demonstrates a fundamental misunderstanding of the very nature of the Internet – it’s as if we were talking about television and not the Internet.
By focusing on speed we fail to address needs that are vital to life. Medical monitoring is one example. The focus on speed leaves us disconnected and adrift everywhere else. If instead we focus on taking advantage of the basic connectivity – even at a very slow speed – we will create value. Once we understand the basic concepts of connectivity it will be utterly and completely trivial to get speed.
I know this from firsthand experience – our home networks run at a gigabit per second with no service fee but only because we didn’t make speed a requirement as I explain below. We were free to use very inexpensive technology and providers were forced to improve the technology while keeping the cost down in order to sell new products.
The reason we are concerned about network neutrality is that we cannot do the networking ourselves. The transport is owned by service providers and we must buy networking as a service with an ongoing fee.
There is no fundamental difference between the physical network in our home and the physical networks outside our home. We have to buy the service because we are not given a choice of doing it ourselves. Why pay $50/month for services that are often a thousand times slower than the networks within our homes?
Networking is something we do – why must we pay others to do it for us?
In the service model the infrastructure is deployed by a small number of service providers to meet their needs. We call them carriers but they are really providing services such as telephony and networking. They built the transport in order to support their services. This worked as long as they were selling high value and high priced services such as telephony and cable TV. They can’t charge a premium for mere bits and thus the more we do the networking ourselves using the Internet the less revenue there is.
This is a fundamental dynamic. The cellular industry’s “IP Inter-working” plan acknowledges that abundant capacity is a serious threat and they are explicit about forcing users to buy services instead of creating their own solutions. IMS is the landline equivalent of the cellular industry’s 3G technology.
Neither is necessary – our experience in using the Internet leaves no doubt that we can do far better without the “help” of such services. Not only can we do better but the technologies are evolving hundreds of times faster than the legacy 3G and IMS technologies. It’s the same dynamic that doomed X.400 against SMTP.
It is also important to recognize that the current Internet is only a hint of what is possible. It was a first attempt to interconnect our local networks and was intended to be a prototype. We can now do networking from the edge ourselves – there is no need to pay for networking as a service. Without a central source of networking the technology can improve very rapidly.
The Internet has become so important because of what I call the opportunity dynamic. At first there was little capacity and all you could do was send email and exchange files. These few applications were enough to create a demand for a few more of those inexpensive bits (AKA capacity). I call it the opportunity dynamic because there are no “solution providers”. Instead the users had to find out what was possible – they took advantage of existing opportunity.
The Web gave normal people a reason to use the Internet and the demand grew rapidly. In effect, the demand created supply because it could use any resources available. Yet the price could not rise because there was no mechanism for distinguishing high-quality bits (or paths) from the rest.
Businesses, like Genuity, tried but failed to realize the benefits of the value they added. Yet if we cannot raise the price of bits above the cost, the telecommunications industry cannot fund its investment in infrastructure.
Non-neutrality is the ability to price some bits more than others but, as we have seen, this is impossible unless we assure scarcity and prevent users from creating their own solutions.
No wonder neutrality is such a divisive issue. But ultimately the effort is futile – there is no way to prevent users from redefining the problems they are solving to take advantage of opportunities. The Internet is not about any particular technology. It is not a service.
It is about nothing more than our ability to create our own solutions by taking advantage of opportunities. This is the end-to-end argument.
The only stable solution is to recognize that the physical transport is an entity in itself – in fact, it is just copper, fiber and radios (CFR) that we “light up” to do our own networking locally. We then interconnect the systems. It is somewhat like roads but without the physical encumbrances and with vastly more capacity. You cannot value a bit out of context and cannot price a bit above cost because there is a vast sea of bits and no natural chokepoint.
This is very good news because we’ve already paid for abundant capacity – but that capacity is kept off the market because the funding model requires scarcity in order to maintain a price floor. The telecom industry publicly acknowledges that they are threatened by abundance and therefore must limit customer access to the basic transport.
The solution is obvious – make the CFR available as a basic resource under local control like the roads. We have ownership at various scales from our homes, to our cities and beyond.
The real question is why we are so reluctant to face this simple and stark reality. Partly it is a failure to understand why the Internet works as it does. We presume we need a company that provides “Internet” just like we need a company that provides “phone”. But neither is true.
The other reason is that the change seems to be too drastic but if the alternatives are not viable then we must face up to the consequences of such a constraint. The AT&T divestiture gave us a small taste of such a realignment. We can also look to the Savings and Loan crisis when a trillion dollars was allocated to correct an industry that had become terminally dysfunctional.
Congress had to face up to the S&L crisis because it was happening and we could see the money being lost. The problem of the Internet being stymied by the telecommunications industry is not as obvious because we are inured to the problems, accept them as necessary, and fail to see the lost opportunities.
The concerns about Broadband Competition and Network Neutrality demonstrate a vague sense that something is wrong. But as anyone who has succeeded in business knows you can’t simply implement customers’ inarticulate requests. You have to look ahead and provide what they will actually buy. They might have wanted faster horses but they bought automobiles.
The CB fad is very similar to the cry for more broadband – shortly after the number of CB channels was increased, cellular phones were introduced and, even if not yet universally available, the demand for increased CB capacity vanished.
A better “Internet” is not broadband – it is pervasive connectivity. It doesn’t matter if it isn’t very fast at first as long as it is available everywhere with or without a wire. As we’ve seen the capacity will increase very quickly because we already have the technology in place – but we’re locking it up within the broadband pipes.
The FCC exists to assure the viability of an industry that can’t survive on its own. In 1934 there seemed to be no alternative but today there is no excuse.
The more you use the Internet to create your own solutions the less service revenue there is. If we simply did away with the FCC the marketplace would be more like today’s home networks. If we could use the basic fixed assets (CFR) to fashion our own solutions we would quickly have multi-gigabit connectivity at little more than the cost of a few strands of glass. Or we can continue to maintain a chimera that locks away this infrastructure behind the services model with a few privileged service providers limiting our future.
MY OWN EXPERIENCE
When I was at Microsoft I initiated and stewarded the home networking effort thus creating the demand for broadband. I built upon my experience implementing connected computing systems since 1966 and on the experience of my friends and colleagues who played key roles in making the Internet what it is. As an observer, as well as a participant in the growth of personal computing, I’ve tried to understand the processes that have given us the kind of hyper-growth we associate with Gordon Moore’s law.
We tend to associate Moore’s law with physics but it’s really about marketplaces. It’s a testament to the importance of antitrust principles in assuring we all have opportunity to choose our own solutions. In the case of Moore’s law it was the separation of the hardware and software marketplaces that gave Intel a chance to provide generic solutions, including one that happened to be the personal computer—even though Intel itself thought it was providing chips for minicomputers. Fortunately Intel didn’t force users to use the chips as intended. It made its money based on the sales of chips and not by getting a percentage of the value created by those using the chips.
The Internet is defined by the End-to-End principle which decouples the physical transport from the solutions created by those using the transport. I purposely say physical transport since the term “facilities” tends to be associated with particular services and that overconstrains the solution. To understand the difference it’s useful to compare the gigabit networks in our homes with the networks outside our homes. The facilities we use to connect with the rest of the world are only available as a subscription service at far lower speeds and at monthly prices far higher than the one-time cost of the networking equipment we use in our homes. Yet the wires between our homes are fixed assets just like those within our homes!
When I first started the home networking effort today’s broadband providers wanted to provide networking as a service with ISDN and then ADSL being the transport. I explicitly chose to avoid this model and make home networking very simple and inexpensive. Simplicity is a prerequisite for transparency in the marketplace. The home network is simply too simple to force people into buying networking as a service. They can choose to pay for assistance but they don’t have to – at least not until the broadband providers regain control. But I am getting ahead of myself.
The most important lesson I’ve learned as an implementer and observer and academician is that complexity represents our failure to find the essential simplicity. A complex system can’t work if the elements are too interdependent – and if they aren’t then we can find simple decompositions. This is what Copernicus did – he didn’t disprove Ptolemy. He found the simplicity by shifting the reference point.
It’s difficult to communicate when we use the same word but mean different things. Central to the issue is the word “broadband”. To many it simply means “Fat Pipe”. But in practice it is a delivery system that is defined in terms of traditional service models. The Internet itself is treated as a service called “Internet”. We should avoid using the term “broadband” if we want clarity but it is too embedded to ignore.
The services are only available over billable paths. Thus the Internet cannot be used as infrastructure in its own right, and there cannot be pervasive wireless availability when every connection requires its own billing arrangement.
WHAT PROBLEM ARE WE TRYING TO SOLVE?
BROADBAND VS NEUTRALITY
The implicit assumption is that the Internet is very important for the economy and society as a whole and we should do what we can to assure that we continue to get this benefit. The Internet represents a level playing field which has given us all the chance to create our own solutions – and we have. If the carriers play favorites they would be in a position to tilt the playing field and return us to the days when they picked winners and created price hurdles that prevented the creation of new solutions.
The idea that the network itself should be neutral and not discriminate in favor of what is most profitable to the carriers is a laudable goal and, in a sense, the essence of the Internet’s defining principle – the End-to-End argument. But we have to be careful about interpreting it. At the very least we need to recognize that End-to-End is not “Womb-to-Tomb” and carrier jargon like End-to-End QoS (Quality of Service) shows a failure to understand the fundamental nature of the Internet. This misunderstanding is endemic to the service-based specifications of the broadband networks. These are smart networks in David Isenberg’s terminology. Very simply, a stupid network is neutral and a smart network is built to favor particular kinds of traffic. The industry’s own documents (such as Fig 14 in this review of passive optical technologies for broadband networks) are explicit in citing improved revenue for the service providers as a goal.
We seem to be bounding the problem by wanting more broadband and more neutrality even as they are opposing goals.
To resolve this we need to understand the fundamental nature of the Internet itself. At the workshop (Feb 13th, 2007) the first panel gave a talk on how data is routed over the Internet. These technical details only hint at the reason that the Internet has had such a large impact on society and why it has created so much economic value.
Yet even if we limit ourselves to the technology itself we find ourselves caught up in an inversion. The telecommunications industry is defined in terms of services. In the days of analog signaling it made sense to build a special infrastructure for each service since the signal was inevitably distorted and the solution was particular to each kind of signal. Today the very idea of broadband competition acknowledges that this is no longer true thanks to digital technologies that allow us to regenerate the signal perfectly no matter the distance. There is no longer a distinction between wired and wireless bits.
Again, I rely on personal experience. In the mid-1990s we had the first attempt at the triple play in which a carrier would provide a package of services. It was even more ambitious than today’s approach since other service providers such as the gas and electric utilities were expected to rent capacity from the triple play provider. Sprint ION was a classic example – they would provide a single pipe to the customer’s premises. If you wanted a second phone line they would be able to instantly provision you with the additional service. The problem is that once people had home networks all of the services could be delivered over a generic IP transport. As far as the carriers are concerned those services simply do not exist on the network itself – they are defined only outside the network by the users.
In 2003 Vonage demonstrated that voice over IP (VoIP) had come of age and “just works”. This was done without QoS by necessity since QoS requires having complete control of the path and the ability to set absolute priorities. Today Skype is a better example as it goes well beyond emulating traditional telephony. Calls within each VoIP community have no cost above the cost of the transport itself. This is just like email. Voice calls to users on the traditional phone network do cost money because they become billable services above the cost of the transport itself. Voice bits are priced at a premium.
We have the same situation with video bits – there is no cost above the cost of the transport for video bits though they may require a higher capacity connection. Video bits that use the traditional cable transports are charged according to the value of the content rather than the bits themselves. If you compare the cost of the video bits to the cost of the voice bits you’ll find huge discrepancies. The low cost of the video bits demonstrates that there is indeed abundant capacity in the first mile – the domain of broadband.
Their playing field seems to be almost vertical.
THE NATURE OF THE INTERNET ITSELF
In order to understand what is going on we need to understand why the Internet has become such an important part of our technical and social infrastructure. The description that the panelists gave at the workshop is basically correct – you can send a message from any device to any other device by simply placing the destination address in a packet with the data bits and then sending it off.
This seems very much like a traditional network – you put bits in one end and they come out the other.
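The mechanics really are that simple. As a hedged illustration (the address and port below are placeholders), a few lines of Python are all it takes to place a destination address in a packet with the data bits and send it off:

    import socket

    # Best-efforts delivery: address a datagram and hand it to the network.
    # The network neither knows nor cares what the bits "mean".
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(b"any bits at all", ("192.0.2.1", 5000))   # placeholder destination

    # The receiving end is just as simple: bind to a port and wait.
    receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    receiver.bind(("", 5000))
    # data, sender = receiver.recvfrom(1500)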
In fact there is essentially no relationship between traditional telecom and the Internet except that the Internet can tunnel through the traditional telecommunications service paths in the same way we used modems to send data across the voice network. More important is that we can emulate the traditional telecommunications services over IP – that is what we do with VoIP.
THE ETHERNET TO THE INTERNET
I’ve been in the fortunate position to observe much of the history first hand. Even as I focused on personal computing, the two communities overlapped. David Reed, an author of the End-to-End paper is a friend and was working with me at Software Arts when the End-to-End paper was published. His insights were invaluable when I set out to remove the mystery from networking so that anyone could create their own home networks without relying on a service provider or installer.
Though I didn’t realize the significance at the time (May 1973) I happened to be sitting next to Bob Metcalfe when he described his class project – the Ethernet. It built upon packet radio technologies such as Hawaii’s Aloha Net. To those of us used to stringing wires the prospect of using coax as a 3 Mbps transport was exciting. As a programmer the end-to-end argument was implicit. Of course it would work. “Proving” that it could work was harder and that’s what earned him his doctorate.
The reason for the different interpretations goes to the heart of the difference between the Internet and Telecom.
I didn’t have any particular requirements – I just wanted to use a 3 Mbps network and learn what I could do with it! Even before I had tried it I wanted to run it over the campus cable TV network and soon learned I’d have to have an up channel and a down channel because that’s the way broadband worked. Ethernet was baseband and used the whole coax but on a cable system I would have to use channels or bands. Too bad I didn’t follow through then.
The thesis committee had performance requirements that had to be met. Thinking back that’s very strange – so what if it performed poorly? We could tinker and figure out how to improve it. In fact, there were other approaches such as “token ring” which passed a token around instead of sensing the carrier. In the end it didn’t matter much – we have since exceeded the most optimistic performance estimates anyway.
It is this difference between seeing opportunity and setting requirements that is at the heart of the policy issues.
The opportunity dynamic is very simple – if you provide opportunity people will find problems that “already” are solved. This is far more efficient than having to create a special solution for each individual problem. The opportunity dynamic works especially well with digital packet systems. Because we can’t depend on the particular path or transport we are able to take advantage of any transport available. At each point there is enough value to create a market for more capacity and that capacity enables new applications – it’s the very powerful virtuous cycle we call Moore’s law.
If we provide opportunity rather than just narrow solutions demand can create supply! There is no hidden hand of the marketplace – we can drill down and see how it works.
The process works especially well with digital systems because we can preserve the fine distinctions between viable approaches and those that fail. Because we aren’t dependent upon every element operating just right we have a resilient solution and can solve our own problems. Because we are creating solutions in software we can easily share them.
The success of the Ethernet itself rather than any abstract theory set the stage for the Internet. In trying to do the next version of the Arpanet the designers found that there were already many different approaches to networking on the various local networks. The best one could do was carry packets without assuming any context. Without the ability to control the local networks at either end the best they could do was best efforts delivery.
It turned out that this constraint was liberating. It forced a discipline that freed the users from dependency on the particular services and thus the transport providers lost the ability to charge for the value of their services. The transport providers didn’t even have the ability to maintain the relationships in the form of a circuit.
LOSING CONTROL
Without this control, as we’ve seen, transport is no longer a viable business in its own right. Too bad so many people are trying so hard to maintain the fiction that there is such a business and then trying to compensate for the distortions by reintroducing neutrality. It’s as if we confused the Maraschino Cherry with the real thing.
It’s important to understand that the IETF – The Internet Engineering Task Force – operates as a community rather than as a governing body that can enforce conformance. The IETF approach works very well because only those solutions that work are adopted by the community and there is a strong incentive to join those communities. Yet some ideas – like multicasting and ratings for web pages – haven’t caught on because the marketplace has not found them to be as useful as the designers thought.
This is in sharp contrast with ICANN which operates with the force of law and needn’t worry about whether its approach makes sense. Thus we are stuck with a Domain Name System that is anti-capitalist because we can only lease the names and cannot own them. Because of this the web will unravel over time as the leases expire and the names are reused. ICANN is the face of governance and the Internet has to survive such favors. No wonder the root server operators resist ICANN’s attempt to take control of their world.
The Internet is a marketplace that allows anyone to define new services. Many of us have taken advantage of the opportunity. The web is just one example and in the tradition of the Internet it is also very simple. We see this simplicity in the mail protocol – SMTP. The http: and mailto: prefixes are actually descriptions of which protocols to use, and the set is easily extended.
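A small hedged illustration of this simplicity (the URLs are placeholders): the prefix on a URL is nothing more than the name of a protocol the endpoints agree on, and inventing a new one requires no change to the network itself.

    from urllib.parse import urlparse

    # The scheme just names the protocol; an unknown scheme parses the same way.
    for url in ("http://www.frankston.com/?name=OurCFR",
                "mailto:someone@example.com",
                "mynewprotocol://some-resource"):   # a made-up scheme, for illustration
        parts = urlparse(url)
        print(parts.scheme, "->", parts.netloc or parts.path)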
In fact the whole Internet is pretty trivial because only simple systems scale and a simple system allows everyone to participate without having to go through a long process of proving themselves to the old guard. This is what gives the Internet its vitality. Complexity is a barrier to entry and simplicity, with the ability to survive mistakes, allows experimentation and discovery. The Web was created by one guy in a basement trying out an idea. One of millions – and it is only a hint of what is to come.
Experiments don’t always turn out well but that’s OK if we’re tolerant and can deal with others’ mistakes. We’re taught to be careful lest we harm others and regulators seek to preempt harm. But we can’t anticipate all eventualities. By putting the emphasis on resilience we get the benefits of discovery. By learning how to survive mistakes we also gain a degree of immunity to malevolence as well. This is the lesson of the body’s immune system – bubble babies do not fare well in the real world.
Efforts to put more protection into the network are ultimately futile because of the ambiguities and doing so means that attacks that get through are far more successful. We do need to find a balance -- many older systems were designed for benign environments and do need some protection while they adapt.
Yet the current telecommunications industry goes too far in claiming that it can tell “good” from “evil” and can sell this ability as a service. They also use it as one more reason that all traffic must converge on their servers (IMS).
The complex protocols of traditional telecommunications are a breath of dank air by contrast with the transparency and simplicity of the Internet. The incumbents are forced to adopt Internet protocols such as SIP and messaging protocols because their traditional protocols are simply not competitive. But by turning evolving protocols into rigid specifications all that remains is a desiccated husk. We see this with MMS—you can’t MMS between carriers and even within a carrier you must have just the right devices and versions. It’s just email but sufficiently different to require every element of the system be adjusted so one can bill for sending pictures.
You can also see this in the router Verizon provides for FiOSTV. When I was doing home networking the details of the router were underspecified so we could experiment and innovate. The evolution continues – new routers may support IPv6. But in order to get FiOSTV I must use Verizon’s router. I cannot take advantage of new technology until Verizon does. There is no reason for this – in fact they have a simpler bridge, the Motorola NIM100, but they refuse to release any documentation on it and they even claim it is not available. These problems are endemic to broadband. Verizon is now installing RG-6 they control rather than using my existing Ethernet wires. This is reminiscent of the old days when they owned my phone wires.
It seems profligate to run new RG-6 cable rather than using my existing network. RG-6 is the thick cable that is necessary to preserve analog video signals – it is clumsy. For those without networks 802.11 can work fine and requires essentially no installation. In practice a VoD stream is only 20 Mbps and the RG-6 is only run at 100 Mbps – this is well within the capacity of standard networks. Microsoft’s Media Center PC adapts to the available capacity – why does Verizon insist on spending large amounts of money to take back control of the wires inside my house and turn it back into a service?
My gigabit home network hits a wall and gets a thousand times slower as it becomes a limited speed billable service upon leaving my home. I’ve compared broadband with a trolley line to each home. Why can’t I take advantage of the gigabits of network capacity available between me and my neighbors just because there may be a constriction at the edge of town? Do we limit the number of cars on local streets based on the capacity of the interstate highways?
It may seem as if we need guarantees in order to make a phone call. It certainly seemed that way in 1980 but because we didn’t force the Internet to provide such services we didn’t infest it with special knowledge inside the network. This allowed the opportunity dynamic to work and today we have far more capacity and quality over the Internet than a service network can provide because of the very nature of the promises a service provider must make.
Yet we fail to learn from the past and today we are infesting our networks with purpose-built broadband systems that assure we can watch TV and assure that we cannot do low value applications like medical monitoring even though our lives may depend upon it.
One caveat – today’s Internet is a prototype and it does have dependencies upon a central backbone provider and allocations of scarce IP addresses and DNS names. This is not intrinsic. Today P2P protocols such as Skype, SIP and many others work around these limitations. The next generation of the Internet is not Internet II – it will be far more distributed without depending on a central provider. It will provide stable relationships without the need for the DNS and mobility will be fundamental – but that’s another topic.
THE INTERNET AS A MARKETPLACE
The Internet is about marketplaces and responsibility. The essence of the end-to-end argument is that ultimately we have to take responsibility for our own solutions. Even if the traditional telephone companies promise us that they will provide a very high quality phone services we still have to be prepared in case they can’t. If you call someone and they don’t answer that’s still a complete call as far as the phone company is concerned but it does you no good if you can’t reach the other person. They can’t guarantee that a backhoe won’t cut their wires. The cellular companies can’t promise that you’ll get coverage in your home. These failures may be forgiven in terms of traditional telecommunications policies but they count in the real world and you have to be prepared to deal with it.
If you are forced to take responsibility you become empowered. Buying a service like voice telephony becomes an option and not a requirement. The amazing “discovery” is that you can use the Internet for voice telephony without having to depend on promises (AKA QoS). The results may be surprising but in hindsight it is obvious – it’s about very simple statistics. If you have enough capacity relative to your application then the odds of your packet getting through are sufficiently high that you can depend on it. And herein lies a seeming paradox – you can treat the Internet just like the traditional phone network and become dependent upon its ability to carry voice calls.
So what’s different – why is it so much less expensive to use the Internet if they both carry voice calls?
At a purely technical level the phone system “knows” you are making a voice call and can charge you for the value of the call whereas the voice bits are no different than any other bits. The call itself doesn’t exist as-such on the network. The relationship is only defined by the end points and the fact that it’s a phone call is only an interpretation. You are not dependent upon the particular path. In fact you can’t take advantage of a “better” path and thus the providers can’t charge a premium for quality. We saw this in the failure of Genuity – the only way to get quality is to provide sufficient capacity. If you charge a premium for quality you can’t afford to buy sufficient capacity and the network becomes less valuable!
Yet despite the seeming reliability of today’s Internet we must not become too complacent. There can still be constrictions and outages. If you have limited connectivity you may not be able to talk but you can still send messages. This can be life or death in an emergency. With the cellular network you can either talk or not. If you are creating your own solutions and don’t have sufficient capacity for voice you can still send text messages or even just a call for help.
If you can’t control the path you lose the ability to create billable events – it’s akin to placing a toll booth in the middle of an open field. If you can’t bill by the path and you can’t charge according to the value of the services then how will you pay for all the capacity you need?
Yet, as we’ve seen, the current funding model is dysfunctional because it depends on keeping the price above cost by limiting capacity. The bigger problem is that the funding model depends on the value of services. This made sense when the infrastructure was built as a way to provide services but today this funding stream is threatened by the Internet itself. If we create our own services then the revenues decrease. The more we can do ourselves the less money there is to pay for the capacity.
This is the heart of the problem – the carriers are being asked to and are choosing to pay for the infrastructure by selling services. Yet we are asking them to be neutral and not charge for the bits that are used when we create the services ourselves.
In reality there is no problem – the problem exists only because we have defined a telecommunications industry in legislation and have protected it from normal market forces. The cellular industry itself warns about the danger of abundance. If they didn’t limit our ability to create services they wouldn’t be able to compete with their own customers. The actual costs of the infrastructure are so low that we typically deploy extra fiber “just in case” when we build a highway. But we don’t light it up lest we flood the marketplace!
FROM SERVICES TO DOING IT OURSELVES
The defining issue is not network neutrality itself – that’s just a symptom of a larger problem. And there is no broadband gap to be filled because broadband is the wrong concept.
As I explain in http://www.frankston.com/?name=OurCFR – the Copper, Fiber and Radios are fixed assets. As I demonstrated with home networking there is no mystery. The same is true about the rest of our infrastructure. We should fund it as a fixed asset and hire companies to operate the network and not tie the costs to the value of the bits.
This will enable us to discover new applications that take advantage of the inherent low cost and availability and abundance.
These are key points – the real cost is low. If we can afford to pay for three broadband infrastructures then sharing one will reduce the cost by 60% or more. If we aren’t billing by the path we will quickly assure that all access points have a face that is open to the world. We already have essentially 100% coverage in urban areas and it’s very easy to extend the network when we can all contribute. It’s more likely that we’ll save money than incur a cost.
But we need to get over the naïve notion that the money we pay for phone calls and for carrying video content is anything more than an artifact of a regulatory regime. If we pool our funding to support local CFR locally and regional CFR regionally we may see no additional expense though there may be a start-up cost. It will certainly be far less than we are paying for the services now.
The fear of change is partially due to the false idea that we would bring back the old natural monopoly and fund it via taxes. Just the opposite – these are local common efforts. And paying for shared infrastructure is a cost savings even if we may use the T word to describe some of the funding. In reality it’s more likely that we’d reduce our taxes thanks to a more efficient infrastructure.
We’d also get far better safety services than today’s E911. E911 is a very antiquated system that is dependent upon a brittle infrastructure and manually operated communications that aren’t very useful in real emergencies. Using what we’ve learned from the Internet we can do better. Again, not theory, experience. On 9/11 and during Katrina the Internet kept working and was easier to repair than the traditional networks.
In fact, the lessons are very clear and unambiguous. We are prisoners of our ignorance. We presume there is something magical about phone calls and television that requires companies that specialize in transporting those special bits. These service providers continue to claim that they need their own private transport even though we now expect to be able to use the Internet for video as well as voice.
“Broadband” implementations build the services into the transport. By positioning “Internet” as just one service they are able to preempt the neutrality issues by insisting their service bits don’t count. Broadband is extremely non-neutral with perhaps 99% of the capacity held off for the providers’ own services! Ignoring this makes the whole debate over neutrality seem pointless.
We are continuing to improve our ability to fashion our own solutions despite the carriers, and service providers that don’t own the facilities are finding ways to bypass the need to pay a special fee just to get their bits transported. Hollywood is “downloading”, which means sending the bits through the Internet pipe. By sending the data to PCs they don’t rely on a high-performance connection. I can give many examples, but it’s pretty obvious that the carriers can’t maintain their control.
The idea of neutrality is useful as a concept – it is properly associated with fairness. But we can’t superimpose neutrality on a funding model that is inherently non-neutral.
WHAT TO DO
A SIMPLE FIRST STEP
A simple change in policy would set the stage for further action. We must finally acknowledge that no service is fundamental – bits and connectivity are fundamental and allow anyone to create services.
Bits are inherently neutral. We can then work out the implications of acknowledging this simple reality.
FACING REALITY
As we’ve seen the Internet is really about a marketplace dynamic that empowers us to create and share our own solutions. We do the networking ourselves using the basic raw materials – copper, fiber and radios. These are fixed assets so there is no ongoing cost to using them.
Because there is no conflict with service revenues there are no reasons to limit the capacity. By decoupling the particulars of the transport and the route we enable Moore’s law hyper-growth. This is why a gigabit router is far less expensive than a 300 bit per second modem was a few decades ago. I have to write out “bit per second” because today we simply assume at least kilobits.
The service model of today’s telecommunications represents an inherent and unnecessary conflict of interest. It forces us into a non-neutral funding model and assures scarcity.
The solution is “Don’t Do That”. But we’ve already done that – we’ve created the FCC to protect this chimera from the marketplace.
The solution is to just stop doing that – stop doing harm by hobbling the economy and denying us our unalienable right to communicate and to create our own solutions.
While we can invoke antitrust principles the situation is far more extreme. As we’ve seen, the telecommunications industry knows that without special privileges it is not viable.
TRANSITION
The transition must be done in the spirit of the Internet itself by providing opportunity rather than guarantees. With a hundred thousand communities some will get it right and others won’t but they can all learn from each other. If we try to assure that there is no failure then they will all be equally limited.
Our model is less antitrust than a recovery program. In practice the first step should be a better understanding of the Internet but that may be asking too much. Understanding is more likely to come from experience so we need a path that enables us to discover what is possible.
We can start with a very simple physical model. I call it the CFR (Copper, Fiber, and Radio) model. Unlike structural separation, layers and other complex rules this is very simple. We own those transports collectively just like we own our roads and driveways and highways.
We can then realign the assets – both the physical infrastructure and corporate – using this model. The model itself represents a viable and stable market of the kind that would have happened on its own if we hadn’t worked so hard to prevent it.
This kind of transition is not at all unprecedented. The AT&T divestiture happened because AT&T itself recognized the need for realignment. In retrospect the idea of baby bells wasn’t viable but what is important is that AT&T bought into the process.
Today the carriers understand the problems they face but they were taught in business school to try to keep their business alive no matter what – you don’t admit defeat. This is a risky strategy and one that invites scrutiny by the SEC as well as the FTC.
The other precedent is the S&L crisis where we papered over a trillion dollar hole. Fortunately the cost/benefit is such that we would actually reduce the costs and create value (including new tax revenues). But we may need to paper over the transition with bridge money on the order of billions of dollars.
THE EVOLUTION OF THE INTERNET
Today’s Internet is a prototype built using 1970’s technology and it has done amazingly well considering some of the fundamental design problems. The IP address is not a stable identifier so we need the DNS but then thanks to governance we’re invested in a DNS which is even less stable – we don’t have ownership of the identifiers. Thus we don’t have long term stability.
The basic design has a transport layer which is the kind of dependency that compromises end-to-end. You have to request an IP address. This means you can’t use the current protocols inside your house without depending upon some remote authority. The IPv4 address itself is also too limited to be used freely.
Reinventing the Internet from the edge will require revisiting these assumptions. The good news is that this process has been going on for many years in the P2P community. Applications like Skype have minimal reliance on the IP address and the DNS.
One of the major benefits of the end-to-end principles is that the current Internet and new approaches can coexist – there is no need for a formal transition. New applications can take advantage of the new protocols while existing applications will continue to work.
On day one of a transition everything will continue as it does now. The key is that it’s not just CFR but CFRI – Copper, Fiber, Radio and Internet. We can emulate existing transports even if we change the technologies underneath.
Broadband implementations are just the opposite and each service is tied directly to the physical characteristics as if we had learned nothing from the Internet!
Thanks to end-to-end we are not dependent upon the accidental properties of each transport. Thanks to statistics and abundant capacity, even demanding applications can get the benefits of the Internet without having to build their own transport.
INFRASTRUCTURE
The biggest benefit of this new model will be our ability to take advantage of the Internet as basic infrastructure. Thanks to abundant capacity and the ease of extending the transport it will become very obvious that you connect elements using the Internet.
“Using the Internet” really means defining relationships and, independently, assuring there is transport. If you want to control a traffic light you just set up the relationships and you’re done with it.
This is even more important for emergency services – E911 will seem painfully primitive when we can elect the kinds of rich monitoring we want and choose who we want to monitor.
The biggest change will come because once we have ownership we will immediately have 100% wireless coverage because there will be no reason not to and we already have coverage from access points. What is missing is a simple software change that would provide a public face for each wireless access point while protecting legacy local systems within a protective bubble.
That is 100% coverage with no additional investment. And that’s more the rule than the exception. If we have three broadband systems then melding them into one would produce large benefits. After all, we have a single electric distribution system because having two would make no sense. For connectivity it makes even less sense since the network is more like the roads – a system we use to communicate with each other rather than merely to receive others’ programming as we do now.
ALL THOSE DETAILS
One of the big lessons of Moore’s law is that everything is impossible until you do it. The secret is that we don’t solve hard problems; we take advantage of opportunities. To put it another way, we don’t do the impossible but there are so many possibilities that we find more opportunity than we could’ve expected.
But if we take a worm’s eye view of any transition and focus on a single path then any obstacle is an impediment and it is truly impossible.
Yet if we look at the situation in terms of constraints – we have a current telecommunications industry that is not viable and a connectivity model that is – then any path that gets us from one to the other will work, and the change must happen.
The question is not whether but what we can do to facilitate the transition. The alternative is to wait, as we did with the savings and loans, until we can no longer avoid a very disruptive change. The only question is how disruptive the transition will be and how long we will be denied the benefit of pervasive connectivity.
BUSINESS ISSUES
It’s important to recognize that we are not meddling in the marketplace – just the opposite – we are actually returning control to the marketplace. It is very important to avoid confusing this with “deregulation” which removes the controls while leaving the basic dysfunctions intact and, in reality, maintaining the regulations that shield the industry from both the marketplace and scrutiny.
There is no need to “take” – we are rearranging assets. We can be strict and argue that if the industry is not viable then we needn’t provide any compensation. I am also sympathetic to the argument that the industry exists only because the government has promised to assure its viability – no matter how futile it is.
I’ve called this the “stole it fair and square” defense. But it is far more important to get the benefits of a change than to punish unwise investments. We can elicit the industry’s support by treating this as a realignment with money used as a bridge and to compensate those who bet on regulations against the marketplace.
It is a little more difficult to compensate industries that are secondary players. Radio networks have assumed that spectrum allocation gave them privileged access to the marketplace. If we have pervasive connectivity they are no longer special.
These problems can be solved with money and thus they cannot and must not be used to prevent transition.
Some of the issues are less obvious – the US Congress seems determined to impose “digital television” on us along with its stepchild HDTV. The D in DTV means digital but the HD in HDTV means high-definition. There is no relationship between the two. The entire policy has played out over years and is based on the presumption that spectrum allocation is vital and that over the air broadcasting matters.
Once we have wireless coverage there is essentially no value in having a high-powered radio that can reach great distances in a single hop when everyone can reach the entire world via the Internet. Yet, as with broadband, the entire approach embeds architectural assumptions that assure that we will not do any better than 1995. HDTV is 1920x1080 – only a portion of my 2500x1600 computer screen. But the womb-to-tomb distribution model requires a new industry for each new resolution. With end-to-end, HDTV is just one more low-resolution format.
Is Congress capable of saying “never mind”?
TECHNICAL CONCERNS
Even those technologists who think of themselves as network-centric are often vested in the status quo and accept its premises. They attempt to “fix” the problems they blame on end-to-end by building more knowledge into the network. This is why I refer to end-to-end as a discipline. If you have too much control it’s too easy to fix problems without realizing that such fixes preclude other options and thus reduce the opportunity for users to find new solutions.
But we are going to see a far more distributed Internet without a central authority. We know this can work because the Internet is really a patchwork of individual efforts.
We must understand statistics and what happens when we go from few to many. I compare it with the difference between the behavior of individual particles and the behavior of waves. Voice over IP “just works” when we have enough capacity. If VoIP requires only a small portion of the available capacity then statistics will assure enough packets will get through to make it work well.
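To see why, here is a rough sketch with made-up numbers purely for illustration: many bursty flows share a link, a voice call needs only a small slice of the capacity, and simple binomial arithmetic shows how quickly the chance of congestion vanishes as capacity grows.

    from math import comb

    def overflow_probability(flows, p_active, slots):
        # Chance that more than `slots` of `flows` bursty sources transmit
        # at once, each independently active with probability p_active.
        return sum(comb(flows, k) * p_active**k * (1 - p_active)**(flows - k)
                   for k in range(slots + 1, flows + 1))

    # Illustrative numbers only: 100 flows, each active 10% of the time.
    # A link sized for 20 simultaneous flows (a fifth of the worst case)
    # is congested roughly a tenth of a percent of the time; sized for 40
    # it essentially never is.
    for slots in (15, 20, 30, 40):
        print(slots, overflow_probability(100, 0.10, slots))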
Yet we mustn’t forget that we are talking about opportunity. We are not guaranteeing that voice or video will work. The paradox is that if we try to assure they will work we will ultimately limit the capabilities of the network whereas if we don’t, as we’ve seen, we will have abundant capacity.
We see this in broadband – if we deploy a single fiber we can deploy a bundle with little additional cost. We get gigabits for the cost of megabits. There is a cost to operate or “light up” the fiber but the costs will come down rapidly once we have a large marketplace. Our willingness to pay will go up as we begin to understand the benefits.
THE QOS FALLACY!
The big bugaboo is “QoS” or “Quality of Service”. The term “Quality” is misleading since the concerns stem from the presumption of scarcity – the defining assumption of traditional telecommunications. If you have to be very careful about doling out resources then you need to find the minimum resource allocation necessary for satisfactory service. Thus the phone network in the US allocates 56 kbps for each phone call (64 kbps in Europe). This is considered enough – that doesn’t mean it is high quality. Skype can do far better because it has no upper bound.
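The arithmetic makes the point. A back-of-the-envelope sketch (the link speeds are just familiar examples) shows how small a traditional per-call reservation is compared with capacity we already own:

    # How many 64 kbps phone-call reservations fit on familiar links?
    CALL_KBPS = 64   # the traditional per-call allocation

    links_kbps = {
        "1.5 Mbps DSL": 1_500,
        "100 Mbps Ethernet": 100_000,
        "1 Gbps home network": 1_000_000,
    }

    for name, kbps in links_kbps.items():
        print(f"{name}: {kbps / CALL_KBPS:,.0f} simultaneous calls")

    # A single call is a rounding error on a gigabit network -- about
    # 0.006% of the capacity -- so carefully doling out "quality" matters
    # far less than simply having abundant capacity.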
Yet, in practice, you can’t make such promises. First, the QoS guarantee is conditional – if you can’t get your allocation then you can’t make any call at all. Second, on many links, such as those across the oceans and for cellular systems, the carriers rely upon statistics and can’t enforce the promises. We saw this dramatically in the days of analog cell phones – quality was the ability to make any call at all even if it sounded bad.
Most important is that you can’t guarantee quality unless you control the entire network and have a simple set of priorities. We confuse the ability to reserve capacity for a phone call in a phone network with the ability to decide what messages are most important. Is my EEG less important than a given phone call? If the traffic is encrypted you don’t even know what the bits mean anyway.
Unlike the phone network the packets aren’t marked with a billing relationship so a router inside the network can’t even know which bits are most important.
In fact, if a network did need to play favorites then there isn’t enough capacity; the queues would quickly fill up and the network would be in failure mode anyway.
Fortunately we don’t have to rely on QoS – VoIP must work without it because you can’t have a business that depends on third parties – often competitors – doing just the right thing. In practice the only viable solution is to add capacity.
Those who’ve tried to implement QoS for the Internet, as in the Internet2 project, have discovered it doesn’t even work.
Yet the broadband providers have built QoS limitations into their basic architectures and have argued that the Internet was built for data not video bits (see http://www.mocalliance.com) even as the carriers themselves are migrating to IP. This paper from UC Davis explains how broadband increases the service revenues for the incumbent providers.
QoS is very attractive because, if it is necessary, you need carriers and you can charge a premium. This can’t work if the path doesn’t matter.
QoS is the public face of the network neutrality debate – it may seem like a vital issue but ultimately it’s nothing more than a fantasy. To the extent the carriers believe they will find refuge in non-neutrality they are fooling themselves. But they should know it can’t work, so ultimately they are fooling their investors.
I can go on but no matter how you look at it QoS is an assumption which justifies and even requires non-neutrality. The argument is really about whether we need to continue the policy set forth in the 1927 Supreme Court decision that endorsed the Federal Radio Commission’s right to override First Amendment rights and pick winners and losers.
We no longer have the scarcity that justified compromising the First Amendment. So why are we treating the US Constitution with such contempt by granting third parties the power to limit our ability to communicate?
UNDERSTANDING THE INTERNET
As we gain experience and begin to understand the Internet we will gain more than we can imagine now. We couldn’t imagine the Web before it happened and it’s only a hint of what is possible.
The Internet is about much more than the Web and social interactions. It’s basic infrastructure. We have yet to fully understand how to design systems once we need only define the relationship and not worry about the wires between.
When I see the FCC attempting to build a new special network for emergencies I can only look on in sadness because it shows a total inability to understand the concept of the Internet. By building a special network for that purpose we will find ourselves helpless in an emergency because the system depends on its specialness and thus others can’t contribute. The workers themselves will be unable to buy gear they need because the prices will be too high and the choices will be too limited. And worse, we’ll have another transport that must be funded by charging for transport. And, as we know, that’s not really a viable business.
It’s another face of QoS – by assuming scarcity we have to build a special network for this purpose because we assume that’s the only way to assure availability. In practice it does just the opposite by isolating the workers, leaving them unable to communicate. Even to the extent it works, it creates the very isolation that cost so many lives on 9/11.
There is a cruel irony here – we fear decentralized organizations as the enemy yet we seem intent on creating new brittle organizations even though we know how to create distributed solutions that allow everyone to contribute.
Today the Internet seems vast yet it’s still using only a small portion of the capacity available. Once we are no longer carving out special capacity for traditional telecom and stop segregating wired from wireless bits we will find that we have gigabits and even terabits available everywhere with or without wires. It doesn’t take much imagination because the technologies already exist.
The great philosopher Pogo reminded us that we have met the enemy and it is us. The problem is our failure to understand the technologies that define our lives and our fear of escaping the status quo.
SELECTED REFERENCES
My other writings are posted at http://www.frankston.com/public.
A few of the relevant selections:
- On the physical infrastructure as a fixed asset: http://www.frankston.com/?name=OurCFR
- On Moore’s law and marketplaces: http://www.frankston.com/?name=BeyondComputing
- Artificial Scarcity: http://www.frankston.com/?name=AssuringScarcity
- Connectivity: http://www.frankston.com/?name=Connectivity
- Compositing the Internet from the edge: http://www.frankston.com/?name=OurInternet
- The Internet as “Do It Yourself”: http://www.frankston.com/?Name=DIYConnectivity
- The Internet as a constraint: http://www.frankston.com/?name=Perspective
Others’ Writings:
- The Classic End-To-End Paper: http://www.dpreed.com/papers/EndToEnd.html
- John Waclawsky on QoS: http://www.mydoctorate.com/index.php?name=UpDownload&req=getit&lid=16