DRAFT

Understanding the Internet vs. Telecom
It's difficult to understand the Internet if you look at it from the perspective of telephony because the goal is to create opportunity rather than to provide solutions.

04-May-2020

Warning: This is a first draft and is still being edited.

The Facebook link for this draft is Internet Vs. Telephony. I plan to experiment in the future with including a link for follow-up discussion.

Today people think of the Internet in the telecom framing. That is part of the semantic loading of terms like “The Internet” and “networking”, so I’ll try to avoid using them, though that is difficult because they are so woven into our language. This is especially true of words like “network”, which are generic and allow us to easily talk past each other by using the same word with different implicit definitions.

Part of this is the original design point of a network-of-networks (the Inter-Network). Of course, we still interconnect facilities, but the architectural model has transcended that service-based framing to give us a vast commons. We won’t realize the potential of this commons as long as we view it through the lens of the legacy telecom focus on profitable pipes rather than on creating value in the whole.

I want to focus on the difference between two mindsets. Telecom is based on a provider making promises such as guaranteeing that two dumb endpoints can be used to make phone calls. A provider owns the facilities necessary to implement such promises and the provider needs exclusive control of the facilities in order to assure the availability of the necessary resources.

There is a different approach that I've written about many times. It promises nothing more than making an effort, actually a sincere or best effort, to help packets move to their destination without any knowledge of the meaning or value of the packets.

In the 1970s and 1980s that approach didn’t work well for traditional applications like telephony because there wasn’t enough capacity. Fortunately, there was no need to put in special support because there was already a purpose-built phone network for that. This allowed the Internet to avoid being smart and knowing about the applications. What one could do was use the internetwork for file transfer and exchanging email, which worked simply, without a meter running. One just needed the identifier for the other endpoint and a path for sending packets, without any guarantees of delivery. The computing devices outside the facilities could make up for lost packets by resending. This was a fundamental conceptual shift from depending on a provider to favoring rapid innovation by the users with their own computers. A simple innovation would be to accept that packets get lost and, instead of resending the old packet, just send fresh data. Even something as simple as the idea of accepting lost packets represented a challenge to traditional networking because the providers have no way of knowing whether a packet matters or not. As we’ll see, missing that simple point caused problems when providers tried to help by buffering packets to avoid losses.
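To make the idea of sending fresh data concrete, here is a minimal sketch in Python (the destination address and the “reading” are illustrative assumptions, not anything from a real system) of a sender that never retransmits a lost packet; it simply sends the freshest value and lets the receiving endpoint use whatever arrives:

    # Best-effort sending: no connection, no delivery guarantee, no retransmission.
    # The destination address and the sensor "reading" are placeholders for illustration.
    import random
    import socket
    import time

    DESTINATION = ("127.0.0.1", 9999)  # hypothetical receiving endpoint

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # UDP: just packets

    def current_reading():
        # Stand-in for whatever the application measures at this moment.
        return random.random()

    for _ in range(10):
        # If an earlier packet was lost we don't resend it; stale data has no value.
        message = f"{time.time()} {current_reading()}".encode()
        sock.sendto(message, DESTINATION)
        time.sleep(0.1)

A file transfer would instead resend the missing bytes; the point is that the choice belongs to the endpoints, not to the facilities in the middle.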

A funny thing happened along the way. As usage increased so did the capacity of the Internet because each new application created a reason for generating more capacity and for improving performance. This is another way in which traditional fixed pie metaphors don’t work.

Traditional telecom carefully doles out provisioned capacity, and scarcity leads to higher prices and more revenue. In a fluid market, the more demand there is, the more incentive there is to increase supply. The web was transformational in giving nontechnical people a reason to embrace connectivity, and it drove demand for more capacity to residential as well as corporate users.

The additional capacity generated new applications, and one day voice calls over this unreliable network started to work. At first it was iffy, but as the capacity increased it became so reliable that people assumed it was guaranteed as in the telecom model, but that’s not true. And once there was sufficient capacity, we got video.

Why this works isn’t easy to understand by conventional reasoning according to the traditional model of layered services. The Internet is a technique for using existing facilities as a resource rather than as a layer. This allows us to repurpose existing telecom facilities. If you look at a wire you can’t tell that it’s no longer associated with telecom services but instead is just a raw packet medium with the applications being outside the network and in our devices.

Another reason is that the very design point of the Internet was to interconnect existing networks, and it did that very well. I see it differently: through the lens of maintaining the simple connectivity we get when we plug two devices into the local network. The two views can coexist to an extent because once you have a subscription, and if you ignore what happens when there is a hiccup or when your carrier isn’t available, it seems to just work.

This is especially true within a home network. When I was at Microsoft, I wanted to make connectivity (AKA the Internet) an integral part of Windows by assuring every system came with IP (and TCP) preinstalled and capable of automatic configuration (using DHCP). This made it easy to buy a router and not have to do detailed configuration. Later, when Wi-Fi became available there wasn’t even a need to run a wire.
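As a small illustration of what automatic configuration buys the application, the sketch below (Python; the address 198.51.100.1 is a reserved documentation address used only so the operating system picks a route) never configures anything itself; the local address it reports was assigned by DHCP without the program’s involvement:

    # The application neither knows nor cares how its address was assigned.
    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.connect(("198.51.100.1", 9))  # no packets are sent; this just selects a route
    print("local address chosen for us:", s.getsockname()[0])
    s.close()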

At the time you had to have a separate account for each IP device. That was the equivalent of buying a phone line for each computer because, well, that’s the way dialup worked. The idea of a house full of multiple PCs, let alone cameras and other devices, wouldn’t have been feasible. No one would’ve complained because they wouldn’t have experienced having so many devices.

This is the recurring problem I face in trying to explain why I want open connectivity because today’s applications are tuned to today’s opportunity. How do I explain that I want to do things, to build connected systems, well before others can understand what I’m doing?

Those of us who write software can implement new applications without waiting for providers to implement them. This isn’t just the story of networking; it’s the story of what we can do with software. The Internet is really a byproduct of using software to repurpose and program around the limits (and favors) of existing telecommunications facilities.

The program-around is one of the real challenges. Not only do I not want to pay for favors I don’t need, those favors create problems. One of the classic ways to improve traditional data networks is to add buffering along the path. But the Internet was able to scale because the intelligence outside the network could adapt to the dynamic state of the network. When the network is doing buffering, it is lying to the applications about the actual state of the network, and that wreaks havoc on applications. Instead of slowing down because of limited network capacity, the applications rapidly fill the buffer and then find they may have to wait minutes for the buffer to clear!
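A toy simulation (all the numbers are made up for illustration) shows the effect: because the buffer hides losses, the sender never gets the signal to slow down, and the queueing delay keeps growing:

    # Toy model of in-network buffering: nothing is dropped, so nothing tells the sender to slow down.
    link_rate = 10   # packets the link can actually deliver per tick (illustrative)
    send_rate = 50   # packets the application sends per tick, since it sees no losses
    backlog = 0      # packets sitting in the provider's buffer

    for tick in range(10):
        backlog += send_rate                 # the application keeps pouring packets in
        backlog -= min(backlog, link_rate)   # the link drains only a little each tick
        delay = backlog / link_rate          # ticks a newly sent packet now waits in the queue
        print(f"tick {tick}: backlog={backlog} packets, added delay={delay:.0f} ticks")

    # With drops, or with endpoints watching the rising delay, the sender would back off
    # immediately; the deep buffer just postpones the bad news until the delay is huge.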

Not only do we not need intelligence in the network, such intelligence is inherently problematic: it cannot know the intentions of the applications because all the facilities carry are raw packets without any information about their purpose! Intelligence in the network is the way a provider adds value. If that attempt now reduces the value, then we have a problem.

Today we are getting used to gigabit-or-bust expectations. Value is associated with speed rather than with the opportunity to innovate that drove the innovation in the first place. It’s another “end of history” moment where broadband seems to be a solved problem and we’re ready to optimize it for the moment, and that’s what a provider is very good at. This assumption that it’s about speed (or, if we want to be technical, capacity) is fed by problematic metaphors such as thinking of the Internet as a series of water pipes with each device like an open tap. “Best efforts” is very different – it takes resources that were dedicated to a single application and makes them easy to share – just the opposite of the waterflow model.

I often hear about the Tragedy of the Commons, which is based on a story of farmers sharing a limited piece of land where each has a cow. The argument is that each has an incentive to overgraze. When I first read about it in a computer class it seemed to be a potent warning. In traditional telephony (and with 5G) each user is allocated a dedicated portion of the resources in order to assure reliable transport. But the Internet is very different: rather than big allocations (cows), we have packets, which aren’t simply infinitesimal; they aren’t physical objects, and the meaning is not inside the packets. The Internet Protocol is the technique by which we take the shared facilities and create abundance by being able to share and, just as important, cooperate in the sharing. The farmers can indeed have a fraction of a cow, and those cows can take turns. The Transmission Control Protocol (TCP) is a means of cooperating. Yet, as we saw above, if the network operator lies about the state of the commons we can’t cooperate.
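Here is a minimal sketch of that cooperation, a simplified version of the additive-increase, multiplicative-decrease behavior used by TCP congestion control (the capacity and starting rates are made-up numbers): two senders converge toward an even share of a link with no coordinator in the network:

    # Cooperative sharing in the style of TCP: probe gently, back off together.
    capacity = 100.0      # illustrative shared link capacity
    rates = [5.0, 60.0]   # two senders start out very unequal

    for _ in range(200):
        if sum(rates) > capacity:
            rates = [r / 2 for r in rates]   # overload: everyone backs off (multiplicative decrease)
        else:
            rates = [r + 1 for r in rates]   # spare capacity: everyone probes (additive increase)

    print([round(r, 1) for r in rates])      # after many cycles the two rates are roughly equal

No element in the middle assigns the shares; fairness emerges because the endpoints respond to the same signals.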

The Internet is the antithesis of the waterflow model; it’s the way we share and cooperate.

One of the more spectacular mismatches with the water model is that we normally do not send any “content”. You send URLs. You don’t send a Wikipedia page – you send the URL for the page. A reference. Amazon doesn’t ship physical objects from Seattle – it sends SKU (Stock Keeping Unit) numbers, which are the URLs of the physical world.
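As a sketch of how little actually travels when we “send” a page (the Wikipedia URL below is just an illustrative example), what goes between us is a short reference, and the receiver dereferences it when and if it wants the content:

    # What we exchange is the reference; the content is fetched on demand.
    import urllib.request

    url = "https://en.wikipedia.org/wiki/Internet"   # the "message" is just this short string
    request = urllib.request.Request(url, headers={"User-Agent": "reference-demo"})
    with urllib.request.urlopen(request) as response:
        page = response.read()                       # dereferencing happens at the receiving end

    print(f"reference: {len(url)} bytes; dereferenced content: {len(page):,} bytes")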

And here too we see a parallel with how our language works – we talk about things and reference our shared experience. We don’t start from scratch each time.

The assumption is that we just need to extend the broadband pipes everywhere and we’re done. Even if that were feasible (given the business model of telecom) it would be a tragic mistake to optimize for fat pipes as per the waterflow model. We are only at the very earliest stage of discovering what is possible outside of such a pipe model.

It’s interesting to think about how we could have today’s highly interactive web at slower, even dialup, speeds. We already knew the answer – local computation. That’s what made PCs so exciting. Dan Bricklin was able to craft a very interactive experience without any need for mainframe connectivity by taking advantage of the early PCs. In the late 1990s the Curl language was intended to bring local computing to the web, but adoption was slow. Today we have web apps which run locally. Without the need for many round trips to a server we can have a highly interactive experience. Locally.
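To see why local computation matters for interactivity, here is a toy comparison in Python (the round-trip time is an assumed figure, not a measurement): a spreadsheet-style recalculation done locally is effectively instant, while doing the same work with a server round trip per edit would make every keystroke wait on the network:

    # Local recalculation vs. a round trip per interaction (illustrative numbers only).
    import time

    ASSUMED_ROUND_TRIP = 0.2   # assumed seconds per server round trip over a slow link

    cells = {"A1": 2.0, "A2": 3.0}

    def recalc_locally():
        cells["A3"] = cells["A1"] + cells["A2"]   # the "formula" runs on this machine

    start = time.perf_counter()
    for keystroke in range(100):   # a hundred edits, each recalculated locally
        cells["A1"] = float(keystroke)
        recalc_locally()
    local_time = time.perf_counter() - start

    print(f"local: {local_time:.4f}s; with a round trip per edit: {100 * ASSUMED_ROUND_TRIP:.0f}s")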

Streaming started to take hold well before today’s gigabit connections. In 2004 Lucent claimed we needed a control plane for multimedia, but their stock fell 90% once people realized we already had multimedia. Now 5G is making similar claims.

Today we are up against the limitations of the economic model of telecommunications, which is based on the provider paying for infrastructure from the revenue it gets by selling services. Now that applications like Voice over IP (VoIP) are implemented in our devices, the ability to seek rents by limiting connections to subscribers is the remaining source of funding – one that doesn’t work very well. When we no longer require reliable delivery, the facilities owner is no longer adding value beyond their ability to control the path. That need for a subscription means that devices only interconnect after someone has paid and is maintaining the subscription. This is a fatal problem because the devices we walk around with can’t know there is a toll collector (or rent seeker) in the path. Collecting a rent is not, in itself, evil. But in this case, it prevents connectivity and that is a fatal barrier.

While we have a single subscription for the entire house (because of home networking), we are still stuck in the dialup model for wireless, with an account for each device. No wonder mundane health care has been stuck in neutral. Even the simplest applications can require negotiating with hundreds of carriers around the world or depend on brittle pairing with each set of devices. There is no necessity for that complexity other than the legacy business model of thinking of telecom as a series of billable services, even if we no longer need services like phone calls and even if the smarts in the network get in our way.

This economic model drives 5G, which is primarily about creating value inside the network. The rationale is akin to the idea that it’s all about faster and fatter pipes. This is why we see remote surgery touted. But the ability to monitor health needs very little capacity and doesn’t need highly reliable capacity. What it needs is simple connectivity, so packets are not blocked by paywalls. It needs the ability to assume connectivity everywhere so that any capacity added supports all applications. This is what happened with VoIP – the capacity created for the web enabled the unanticipated growth of voice and video conversations. This is the real magic of best efforts connectivity – enabling applications we can’t anticipate.

Today we lament the lack of universal “broadband”. Alas, that very term makes it difficult to implement rational policies because it embodies the fat pipe assumption. The reason we don’t have more broadband [sic] is that the economic model doesn’t work. The fatter pipe might enable new applications, but the pipe owner doesn’t get the additional revenue. In fact, the term broadband comes from the cable TV companies relaying the over-the-air broadcast signals. They get their revenue by selling that content. As we start to use the raw capacity of the pipe for commodity packets, the providers lose their content revenue, and the money that a Netflix or Zoom generates does not go to paying for that pipe.

One solution is for the FCC to “solve” the problem with 5G and by giving the pipe owners (providers) more control in order to force a Netflix to pay its share of the pipe. That is a terrible idea for a number of reasons. One is simply that any accounting model is based on the false waterflow analogy. It also maintains a system of high prices by preventing market forces from taking advantage of the ability to treat all wires and radios as an abundant common facility. Instead we pay a high price for redundant broadband connections. And it gives the provider the ability to decide which applications work and which ones don’t. Standard network practices work at cross purposes with innovation outside the networks.

Again I have to mention 5G because the claim that you need a special network for each application puts the providers in the position of choosing winners and losers based on the value to the provider rather than letting customers decide what is most valuable. The few bits necessary for critical medical information don’t generate as much revenue as the entertainment that justifies filling the pipes.

The alternative is simple – adopt a funding model in which we pay for facilities and not network services. The providers are no longer competing with their users because the users own the facilities and don’t need to capture the value inside the wire. This means we can provide universal connectivity and we can provide 100% coverage.

In 1997 I proposed that we replace the telephone line cards, which back then supported either dialup or DSL, with ones that used electronics to do both. If your home modem detected the modern line code, it would have immediately boosted speeds from 56 Kbps to 1 to 10 Mbps. The cost would have been $50 to $100 per house for universal broadband using existing facilities. Sure, gigabit fiber is nice, but having a taste of universal connectivity would’ve led people to see the value in connectivity and we’d have that capacity today – actually a lot more – but at a vastly lower cost by taking advantage of the ability to share facilities.

But we do not need high capacity to discover value. It’s OK to have a low-performance radio mesh to extend forest monitoring. Having higher performance is nice but secondary to the ability to assume simple connectivity. Once we get a taste of what is possible, we can choose to add more capacity … or not. It’s up to us to take advantage of opportunities rather than being limited to a provider who must second-guess what we are doing in order to capture value.

The ability to treat the Internet as infrastructure (once we have subscriptions in place) is what makes the Internet so transformational. We can assume connectivity and focus instead on what we do with the infrastructure. But we will fail to realize the full value as long as we focus on broadband as a service from a provider or, more to the point, from multiple providers, sold only to those who subscribe.

The future is a Public Packet Infrastructure – a vast commons in which we can cooperate and share, rather than having dedicated facilities (such as radio spectrum bands) that require excluding all others while they are in use and that prevent people from communicating unless they pay, are within their limits, and have the right carrier in the right place. And even then, they are limited by being unable to extend connectivity themselves. Adding capacity today is called stealing!

We need to embrace the powerful ideas that gave us the Internet and move ahead rather than assume today’s Internet is anything more than a hint of what is possible in the future.

Notes

This is a place to accumulate feedback and ideas.

  • Need to tell the story about applying these ideas to sharing resources like a single wire.

Other Readings