
Understanding the Internet vs. Telecom (DRAFT)
It's difficult to understand the Internet if you look at it from the perspective of telephony because the goal is to create opportunity rather than to provide solutions.
06-Mar-2021 (Version 2: 2021-05-07 08:39:14)

Today people think of the Internet in the telecom framing. That is part of the semantic loading of terms like “The Internet” and “networking,” so I’ll try to avoid using them, though that is difficult because they are so woven into our language. This is especially true of words like “network,” which are generic and allow us to easily talk past each other by using the same word with different implicit definitions.

Part of this is the original design point of a network-of-networks (the Inter-Network). Of course, we still interconnect facilities, but the architectural model has transcended that service-based framing to give us a vast commons open to innovation. We won’t realize the potential of this commons as long as we view it through the lens of legacy telecom’s focus on profitable pipes rather than creating value in the whole.

I want to focus on the difference between the two mindsets. Telecom is based on a provider making promises, such as guaranteeing that two dumb endpoints can be used to make phone calls. A provider owns the facilities necessary to implement such promises, and the provider needs exclusive control of the facilities to assure the availability of the necessary resources.

There is a different approach that I’ve written about many times. It involves nothing more than making an effort, a sincere or best effort, to help packets move toward their destination without any knowledge of the meaning or value of the packets.

In the 1970s and 1980s, that approach didn’t work well for traditional applications like telephony because there wasn’t enough capacity. Fortunately, there was no need to put in special support because there was already a purpose-built phone network for that. This allowed the Internet to avoid being smart and knowing about the applications. What one could do was use the internetwork to transfer files and exchange email without a billing meter running. One just needed the identifier for the other endpoint and a path for sending packets, without any guarantee of delivery. The computing devices outside the facilities could make up for lost packets by resending them.
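As a minimal sketch of that endpoint-side recovery (illustrative Python, not TCP or any specific protocol; the address, timeout, and framing are made up for this example), the sending computer keeps each chunk and resends it until the other end acknowledges it:

```python
# Illustrative only: reliability built by the endpoints on top of a network
# that merely makes a best effort to deliver raw datagrams.
import socket

DEST = ("127.0.0.1", 9998)   # hypothetical receiver for this sketch
TIMEOUT = 0.5                # seconds to wait for an acknowledgment before resending

def send_reliably(chunks: list) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(TIMEOUT)
    for seq, chunk in enumerate(chunks):
        while True:
            # Send a sequence number plus the data; the network knows nothing about either.
            sock.sendto(seq.to_bytes(4, "big") + chunk, DEST)
            try:
                ack, _ = sock.recvfrom(16)
                if int.from_bytes(ack, "big") == seq:
                    break        # acknowledged; move on to the next chunk
            except socket.timeout:
                pass             # packet or ack was lost: the endpoint simply resends
```

The facilities in the middle never learn what the bytes mean; all the recovery logic lives in the computers at the edges.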

This was a fundamental conceptual shift from depending on a provider to favoring rapid innovation by users with their own computers. A simple innovation would be to accept that packets get lost and, instead of resending the old packet, just send fresh data. Even something as simple as accepting lost packets is a challenge to traditional networking because the providers have no way of knowing whether a packet matters, so they can’t decide what to recover. A standard cellular system maintains context in the form of a reserved path and guarantees delivery. With the Internet there are just raw packets without context.
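Here, as a contrasting sketch (again illustrative Python with a placeholder address), the endpoint doesn’t resend at all: each datagram carries the freshest reading, so a lost packet is simply superseded by the next one:

```python
# Illustrative only: tolerate loss and keep sending fresh data instead of
# retransmitting stale packets over a best-efforts network.
import socket
import time

DEST = ("127.0.0.1", 9999)   # hypothetical receiver for this sketch

def latest_reading() -> bytes:
    # Stand-in for whatever the application measures right now.
    return f"temp={20 + time.time() % 5:.2f}".encode()

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for _ in range(10):
    sock.sendto(latest_reading(), DEST)   # fire and forget: no delivery guarantee requested
    time.sleep(1)                         # the next datagram is worth more than a resend
```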

As we’ll see, missing that simple point caused problems when providers tried to help by buffering packets to avoid losses. The right strategy is to drop the extra packets because the application will handle it. Instead, by buffering, providers pretend there is more capacity than there is and introduce delays, forcing the application to wait rather than letting it adopt smarter strategies.

A funny thing happened along the way. As usage increased, so did the capacity of the Internet because each new application created a reason to add capacity and improve performance. This is another way in which traditional fixed-pie metaphors don’t work.

Traditional telecom carefully doles out provisioned capacity, and scarcity leads to higher prices and more revenue. In a fluid market, the more demand there is, the more incentive there is to increase supply. The web was transformational in giving nontechnical people a reason to embrace connectivity, driving demand for more capacity among residential as well as corporate users.

The additional capacity enabled new applications, and one day voice calls over this unreliable network started to work. At first it was iffy, but as capacity increased, it became so reliable that people assumed delivery was guaranteed, as in the telecom model, but that’s not true. And once there was sufficient capacity, we got video.

Why this works isn’t easy to understand by conventional reasoning rooted in the traditional model of layered services. The Internet is a technique for using existing facilities as a resource rather than as a layer. This allows us to repurpose existing telecom facilities. If you look at a wire, you can’t tell that it’s no longer associated with telecom services; it is just a raw packet medium, with the applications outside the network in our devices.

Another reason is that the design point of the Internet was to interconnect existing networks. It does that very well. I see it differently: through the lens of maintaining the simple connectivity we get when we plug two devices into the local network. The two views can coexist to an extent because, once you have a subscription and ignore what happens when there is a hiccup or your carrier isn’t available, it all seems to just work.

This is especially true within a home network. When I was at Microsoft, I wanted to make connectivity (AKA the Internet) an integral part of Windows by ensuring every system came with IP (and TCP) preinstalled and capable of automatic configuration (using DHCP). That made it easy to buy a router without having to do a detailed configuration. Later, when Wi-Fi became available, there wasn’t even a need to run a wire.

In the 1990s, you had to have a separate account for each IP device, the equivalent of buying a phone line for each computer because, well, that’s the way dialup worked. The idea of a house full of multiple PCs, let alone cameras and other devices, wouldn’t have been feasible. No one would’ve complained because they wouldn’t have experienced having so many devices.

This is the recurring problem I face in trying to explain why I need connectivity. Today’s applications are tuned to today’s opportunity. How do I explain that I want to do things, to build connected systems, well before others can understand what I’m doing?

Those of us who can do so implement new applications without waiting for providers to offer them. This isn’t just the story of networking; it’s the story of what we can do with software. The Internet is really a byproduct of using software to repurpose and program around the limits (and favors) of existing telecommunications facilities.

Programming around is one of the real challenges. I don’t want to pay for favors that I don’t need. Those favors create problems. One of the classic ways to improve traditional data networks is to add buffering along the path. But the Internet was able to scale because the intelligence outside the network could adapt to its dynamic state. When the network buffers packets, it lies to the applications about the actual state of the facilities, and that wreaks havoc. Instead of slowing down because of limited network capacity, the applications rapidly fill the buffer and then find they may have to wait minutes for it to clear!
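A toy simulation, with invented arrival and service rates rather than measurements, shows the trade-off: a deep buffer hides congestion as seconds of delay, while a small buffer surfaces it as loss that the endpoints can see and adapt to.

```python
# Illustrative only: compare a deep in-network buffer with a small one that drops.
from collections import deque
from typing import Optional, Tuple

ARRIVAL_RATE = 100   # packets per second offered by the sender (invented figure)
SERVICE_RATE = 60    # packets per second the bottleneck link can carry (invented figure)

def simulate(buffer_limit: Optional[int], seconds: int = 10) -> Tuple[int, float]:
    """Return (packets dropped, worst queuing delay in seconds)."""
    queue = deque()
    dropped = 0
    worst_backlog = 0
    for _ in range(seconds):
        for _ in range(ARRIVAL_RATE):
            if buffer_limit is not None and len(queue) >= buffer_limit:
                dropped += 1            # drop now; the sender sees the loss and can slow down
            else:
                queue.append(object())
        worst_backlog = max(worst_backlog, len(queue))
        for _ in range(min(SERVICE_RATE, len(queue))):
            queue.popleft()             # the link drains what it can each second
    return dropped, worst_backlog / SERVICE_RATE

print("deep buffer :", simulate(buffer_limit=None))  # no loss, but several seconds of added delay
print("small buffer:", simulate(buffer_limit=50))    # some loss, delay stays under a second
```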

Not only do we not need intelligence in the network, such intelligence is inherently problematic: it cannot know the intention of the applications because all the facilities carry are raw packets without any information about their purpose! Intelligence in the network is the way a provider adds value; if that attempt now reduces the value, then we have a problem.

Today we are getting used to gigabit-or-bust expectations. Value is associated with speed rather than with the opportunity to innovate. It’s another “end of history” moment in which broadband seems to be a solved problem and we’re ready to optimize for the moment, which is what a provider is very good at. The assumption that it’s about speed (or, to be technical, capacity) is fed by problematic metaphors such as thinking of the Internet as a series of water pipes, with each device like an open tap. “Best efforts” is very different. It takes resources that were dedicated to a single application and makes them easy to share, just the opposite of the water-flow model.

I often hear about the Tragedy of the Commons (TotC), which is based on a story of farmers sharing a limited piece of land, and each has a cow. The key assumption is that the farmers are unable to cooperate, and cows can’t be shared. Very simply, the Internet is the technique for sharing, and we don’t have to allocate chunks for individual applications. The Internet is the answer to the TotC.

One of the more spectacular mismatches with the water model is that applications normally do not send a copy of the content. They send URLs. You don’t send a Wikipedia page – you send the URL for the page. A reference. Amazon doesn’t ship physical objects from Seattle – it sends SKU (Stock Keeping Unit) numbers, which are the URLs of the physical world.
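A minimal sketch of the difference (it assumes the machine running it has Internet access, and the exact sizes will vary): what travels in a typical message is a URL of a few dozen bytes; the receiver turns it into the much larger content only on demand.

```python
# Illustrative only: send the reference, not the copy.
from urllib.request import Request, urlopen

reference = "https://en.wikipedia.org/wiki/Internet"
print("what is sent:", len(reference.encode()), "bytes")       # a few dozen bytes

# The receiver resolves the reference only if and when it wants the content.
request = Request(reference, headers={"User-Agent": "reference-vs-copy-sketch"})
with urlopen(request) as response:
    content = response.read()
print("what it refers to:", len(content), "bytes")             # orders of magnitude larger
```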

And here too, we see a parallel with how our language works – we talk about things and reference our shared experience. We don’t start from scratch each time.

The assumption is that we just need to extend the broadband pipes everywhere, and we’re done. Even if that were feasible (given the business model of telecom), it would be a tragic mistake to optimize for fat pipes as per the water flow model. We are only at the very earliest stage of discovering what is possible outside of such a pipe model.

It’s interesting to think about how we could have today’s highly interactive web at slower, even dialup speeds. We already knew the answer – local computation. That’s what made PCs so exciting. Dan Bricklin was able to craft a very interactive experience without any need for mainframe connectivity by taking advantage of the early PCs. In the late 1990s, the Curl language was intended to bring local computing to the web, but adoption was slow. Today we have web apps that run locally. Without the need for many round trips to a server, we can have a highly interactive experience. Locally.
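As a minimal sketch of that local-computation idea (the cells and formulas are invented for illustration), a spreadsheet-style recalculation runs entirely on the user’s machine, so every edit gets an immediate response with no round trip to a server:

```python
# Illustrative only: interactivity from local computation, with no network involved.
cells = {"A1": 3.0, "A2": 4.0}
formulas = {
    "A3": lambda c: c["A1"] + c["A2"],   # A3 = A1 + A2
    "A4": lambda c: c["A3"] * 2,         # A4 = A3 * 2
}

def recalculate() -> None:
    # Evaluate formulas in dependency order; everything happens on this machine.
    for name, formula in formulas.items():
        cells[name] = formula(cells)

recalculate()
print(cells["A4"])     # 14.0

cells["A1"] = 10.0     # the user edits a cell
recalculate()          # the response is immediate because the work is purely local
print(cells["A4"])     # 28.0
```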

Streaming started to take hold well before today’s gigabit connections. In 2004 Lucent claimed we needed a control plane for multimedia, but their stock fell 90% once people realized we already had multimedia. Now, 5G is making similar claims.

Today we are up against the limitations of the economic model of telecommunications. Providers can no longer fund the facilities by selling services. Now that applications like Voice over IP (VoIP) are implemented in our devices, the ability to rent-seek by limiting connections to subscribers is the remaining source of funding, and it doesn’t work very well. When we no longer require reliable delivery, the facilities owner is no longer adding value beyond their ability to control the path. That need for a subscription means devices only interconnect after someone has paid and is maintaining the subscription. This is a fatal problem because the devices we walk around with can’t know there is a toll collector (or rent seeker) in the path. Collecting rent is not, in itself, evil. But in this case, it prevents connectivity, and that is a fatal barrier.

While we have a single subscription for the entire house (thanks to home networking), we are still stuck in the dialup model for wireless, with an account for each device. No wonder mundane health care has been stuck in neutral. Even the simplest applications can require negotiating with hundreds of carriers around the world or depending on brittle pairing between each set of devices. There is no necessity for that complexity other than the legacy business model of treating telecom as a series of billable services, even if we no longer need services like phone calls and even if the smarts in the network get in our way.

This economic model drives 5G, which is primarily about creating value inside the network. The rationale is akin to the idea that it’s all about faster and fatter pipes. This is why we see remote surgery touted. But the ability to monitor health needs very little capacity and doesn’t need highly reliable capacity. What it needs is simple connectivity, so packets are not blocked by paywalls. It needs the ability to assume connectivity everywhere so that any capacity added supports all applications. This is what happened with VoIP: the capacity created for the web enabled the unanticipated growth of voice and video conversations. This is the real magic of best-efforts connectivity: enabling applications we can’t anticipate.

Today we lament the lack of universal “broadband.” Alas, that very term makes it difficult to implement rational policies because it embodies the fat-pipe assumption. The reason we don’t have more broadband [sic] is that the economic model doesn’t work. A fatter pipe might enable new applications, but the pipe owner doesn’t get the additional revenue. The term broadband comes from the cable TV companies relaying over-the-air broadcast signals; they get their revenue by selling that content. As we start to use the raw capacity of the pipe for commodity packets, the providers lose their content revenue, and the money that a Netflix or Zoom generates does not go to paying for that pipe.

One solution is for the FCC to “solve” the problem with 5G by giving the pipe owners (providers) more control so that they can force a Netflix to pay its share of the pipe. That is a terrible idea for a number of reasons. One is simply that any such accounting model is based on the false water-flow analogy. It also maintains a system of high prices by preventing market forces from taking advantage of the ability to treat all wires and radios as an abundant common facility. Instead, we pay a high price for redundant broadband connections. And it gives the provider the ability to decide which applications work and which ones don’t. Standard network practices work at cross purposes with innovation outside the networks.

Again I have to mention 5G because the claim that you need a special network for each application puts the providers in the position of choosing winners and losers based on the value to the provider rather than letting customers decide what is most valuable. The few bits necessary for critical medical information don’t generate as much revenue. This is in sharp contrast with the video entertainment that justifies filling the pipes.

The alternative is simple: adopt a funding model in which we pay for facilities and not for network services. The providers are no longer competing with their users because the users own the facilities and don’t need to capture value inside the wire. This means we can provide universal connectivity with 100% coverage.

In 1997 I proposed that we replace the telephone line cards, which back then supported either dialup or DSL, with ones that used electronics to do both. If your home modem detected a modern line card, it would have immediately boosted speeds from 56Kbps to 1 to 10Mbps. The cost would have been $50 to $100 per house for universal broadband using existing facilities. Sure, gigabit fiber is nice, but having a taste of universal connectivity would’ve led people to see the value in connectivity, and we’d have that capacity today – actually a lot more – at a vastly lower cost by taking advantage of the ability to share facilities.

But we do not need high capacity for all applications. It’s OK to have a low-performance radio mesh to extend forest monitoring. Having higher performance is nice but secondary to the ability to assume simple connectivity. Once we get a taste of what is possible, we can choose to add more capacity … or not. It’s up to us to take advantage of opportunities rather than being limited by a provider who must second-guess what we are doing in order to capture value.

The ability to treat the Internet as infrastructure (once we have subscriptions in place) is what makes it so transformational. We can assume connectivity and focus instead on what we do with the infrastructure. But we will fail to realize the full value as long as we think of broadband as a service from a provider or, more to the point, from multiple providers, sold only to those who subscribe.

The future is a Public Packet Infrastructure – a vast commons in which we can cooperate and share, rather than relying on dedicated facilities (such as radio spectrum bands) that exclude all others while they are in use and that prevent people from communicating unless they pay, stay within their limits, and have the right carrier in the right place. And even then, they are limited by being unable to extend connectivity themselves. Adding capacity today is called stealing!

We need to embrace the powerful ideas that gave us the Internet and move ahead rather than assume today’s Internet is anything more than a hint of what is possible in the future.

Other Readings