Open Internet
Name: Bob Frankston
Open Internet Proceeding: 14-28
URL: http://RMF.VC/ConnectivityPolicy201407
Email: ConnectivityPolicy201407@bob.ma
Contents
- Open Internet
- Introduction
- Executive Summary
- Overview
- An Appropriate Marketplace
- Learning from Castle Village
- The Residential Gateway Circa 1995
- Connectivity Starts at Home
- DIO Infrastructure for Connectivity
- Compare with Today’s Policies
- Case Study: Connected Healthcare
- Taking Advantage of Opportunity
- A Powerful Idea: Borderless Connectivity
- Policies
- Network Neutrality
- Comcast/TWC
- Opportunity Zones
- Structural Separation
- Municipal Broadband
- Spectrum Auctions and all that
- IP Transition
- Security and Privacy
- Better, Rather than Smart Cities
- Impact on existing businesses
- Appendix: Language & Meaning
- Readings
Introduction
This white paper is based on a talk I gave to the Columbia University CITI policy group on June 2, 2014. This is an expanded version of the points I made, including some of the issues raised in the Q&A.
This is a work-in-progress. I plan later versions and/or new papers that will explore related issues in more detail.
This paper is aimed at telecommunications policy specialists. While the FCC can play a leadership role in setting policy, its regulations are framed in the status quo of telecommunications. The legacy telecommunications industry is based on a particular way of using our wires and radios to communicate. The assumption is that communications is a service from companies that own the facilities and are the exclusive providers of services like telephony and video delivery. The Internet is a fundamentally different way to use the same facilities: we create the services ourselves using any available resources.
This is a disruptive change that requires thinking very differently about what it means to communicate. Thus I expect that change will come from local efforts (as in apartment houses and other communities) rather than being imposed by regulators. I use the term DIO, or “Do It Ourselves”, to describe how the Internet is the result of people working to find ways to communicate using their intelligent devices rather than being dependent on third parties.
Once new paradigms are accepted the FCC (or its successor) can adapt to the new landscape.
Those who first want a detailed explanation of how the Internet does (and does not) work can jump ahead to the Appendix explaining today’s Internet.
Executive Summary
The goal of public policy for connectivity should be to assure access to our common facilities as a public good by adopting sustainable business models that don’t put owners and users at odds with each other. Such balances are typically difficult to achieve, which is what makes connectivity so unusual – we can achieve both once we fund the facilities as a public good apart from particular applications such as telephone calls and cable content.
The Internet represents a discontinuity from the past and our policies need to reflect this fresh start. We can now frame policies in terms of creating opportunity. It’s not a thing in the sense of the wires and the gears – it is what we do with them.
The Internet is a result of an idea – our new understanding of how to use available resources to communicate without depending on telecommunications providers to act on our behalf and without depending on them to make a profit before we can talk.
Perhaps it’s the stark simplicity of the idea that makes it so difficult to grasp. One must think architecturally and understand that we are exchanging packets totally apart from their interpretation (meaning) and their value. The web is just an application using this connectivity and is not the infrastructure in itself.
The key to approaching the new policy is to focus on providing the facilities to exchange packets apart from any particular application. If we go to the basics, legacy telecommunications is a way we’ve used the wires (and radios) to speak among ourselves and, now, among our devices.
This idea of separating out the business of transporting packets is not new. We’ve seen it called structural separation and it’s also implicit in spectrum auctions. But rather than a wholesale/retail model, the facilities we use to exchange the packets must be funded as a whole because they are a public good.
The Internet does not depend on circuits (or, as they are sometimes called, pipes) but rather each packet can find its own way and it’s OK if a few get lost. Instead of assuring that any particular application works we have to discover what we can do with whatever facilities are available.
This requires a break from the FCC’s approach of assuring results to an approach in which communities pay for a common infrastructure. Today’s telecommunications policies presume (and thus preserve) scarcity. By having vibrant competition among companies seeking to offer communities the best of the available facilities we create the conditions for hyper-growth (AKA Moore’s Law).
When we talk about scarcity or abundance we have to ask scarcity or abundance of what? It’s a measure relative to a particular purpose or use. A glass of water is too little for watering a lawn but can be plenty to slake a thirst.
The secret to Moore’s law is to take advantage of opportunities even if they don’t apply to the problem at hand. Thus if we have an infrastructure that supports exchanging files (and email) rather than voice conversations we do just that. And in the 1990’s we discovered that the capacity generated for the web enabled voice as a new opportunity even though we hadn’t built it into the network. Contrast that with the PSTN (Public Switched Telephone Network), which was designed for voice and which we now have to transition away from because that’s all it’s good for.
This abundant capacity is the starting point and allows us to discover what we can do with this new resource ranging from simply allowing us to do today’s applications (like the web) to connected healthcare and so much more.
It isn’t easy to embrace an entirely new idea and this brief introduction won’t fully explain it. This is why we need examples. We can build on the experiences with networking in our homes, which is DIY (or Do It Yourself). We can take this to the next step by starting with communities that already work together, as in apartment houses with boards that can act on behalf of the community. I call this DIO or Do It Ourselves.
The scope of DIO can start very locally and expand out as more join in. But even those nascent efforts are fully connected with the rest of the world just as the devices on your home network are connected. The key is in recognizing that the entire telecommunications infrastructure is simply a readily available resource.
We call the service “broadband”, but instead of thinking of it as the Internet being delivered we can start with our devices and treat broadband as simply a way of buying the use of existing capabilities. In the home the boundary between DIY and broadband is the edge of the home. In a DIO apartment house it’s at the edge of the building. And when neighboring buildings join together the boundary of DIO extends until it reaches the entire city and beyond.
These DIO efforts set a powerful and viral example.
The FCC should work with this process rather than trying to preserve the regulatory system we have now. It just needs to understand that the Internet has shown us new ways to use existing facilities.
This very simple approach will give us a market-based, sustainable model. Public policy issues like network neutrality will be a result rather than something we have to micromanage. We will be exchanging packets of bits rather than transporting content. The facility operators won’t know which packets should get better treatment because the packets are decoupled from their meaning.
Overview
The Internet poses a challenge to regulators because it is a fundamentally different concept from telecommunications. A defining assumption of telecommunications is that value is created by a network operator. The operator owns the facilities necessary for the particular services. The intelligence resides within the network either in the skills of the operator or the gear which implements the services.
It used to be very hard to make long distance telephony work. Analog telephony is like the game in which people whisper a message to each other: after a few hops the slight misunderstandings accumulate and you can no longer make out the original message.
Digital was a breakthrough – all you have to preserve are zeroes and ones. You knew the signal had to be one or the other, so instead of drifting it would get regenerated at each step. Digital signaling benefited the phone companies by making their task much easier.
It also had unanticipated consequences. An upstart like MCI could provide an alternative path for the conversation and undercut AT&T’s business model to the point that AT&T had to try to reinvent itself by divesting its retail business so it could focus on what it thought was the profitable business of wholesale telephony. The changes were far deeper – the same technologies also gave us digital computing and shifted value (intelligence) from inside the network to our devices outside the carriers’ networks. We could do more than better telephony; we could reinvent the world. And, indeed, we have reinvented the world.
A Discontinuity
It’s too easy to see today’s Internet as the next step in the refinement of earlier technologies because, on the surface, it can be used for the same kind of services as today’s telecommunications. We simply substitute IP for ATM (Asynchronous Transfer Mode) and continue business as usual. ATM is a protocol that allows a carrier to manage the capacity assigned to each circuit while IP doesn’t even have the concept of circuits let alone the ability to make promises like SLAs (Service Level Agreements).
It is normal to see new technologies in terms of the old, but an automobile isn’t just a carriage without a horse. And a highway isn’t just a railroad without tracks. Together they give you the freedom to go where you want rather than being limited to destinations that are profitable to the owner of the tracks. Unlike railroads, the road system isn’t tied to any particular business model. In fact we can use any path or sidewalk or waterway or whatever to get around. It’s up to us, everyone and anyone, to discover what is possible.
This is why we have a Department of Transportation managing some of the facilities rather than an Interstate Commerce Commission which presumes a particular business model.
The Internet is not a digital telecommunications system. It is a product of the new digital computing technologies that allow us to create our own solutions, just as we drive our own cars using any route available. Roads make it easier but we can also drive off-road if we choose the appropriate vehicles.
Analogies are far from perfect, but you can think of Internet packets as navigating their way through routers much as drivers read road signs.
We must be very careful in understanding the future of connectivity, for today’s Internet is still at the horseless carriage stage of its evolution. The current protocols and business models have repurposed the practices and technologies of telecommunications. The future of connectivity is going to evolve very fast once we get past the regulatory regime that keeps us within the narrow confines of today’s telecommunications infrastructure.
Embracing this future will require thinking differently and accepting risk. Video over IP is a good example – we didn’t build it into the network but discovered it could work as a byproduct of the increased capacity created by providers meeting the demand for “more web” (transport capacity).
If we had required that video be built in as a service we wouldn’t be able to afford it. In fact, Picturephone failed for just that reason. By using telecommunications as a resource rather than depending on providers we can discover possibilities that we couldn’t have imagined just a few years ago!
It’s Just Business
The business of telecommunications is based on the very simple idea that the carrier is adding value by carrying messages (like telegrams) and preserving signals (like voice). One can judge the value of a message, charge for it, and apply the fees collected toward maintaining the infrastructure.
This sort of works but the high capital costs and limited ability to differentiate offerings led to the creation of the ICC for railroads and the FCC for telecommunications.
To understand how intelligence is entirely outside of any network think about two computers with radios that can send packets to each other. It doesn’t have to be Wi-Fi – any radio will work as long as they agree on the signaling technique. If anything goes wrong and a packet gets lost then the software can compensate, retransmit or fill in the blanks. Unlike traditional analog radios where a dropout will cause an audible click, the software can easily glide past such disruptions. There is no network operator providing that service!
That’s fine as long as the computers are close together but it’s easy to extend this by having a computer between the two that can relay the messages. It doesn’t really matter whether the packets are relayed with radios or with the assist of a wire or fiber.
All that matters is that some of the packets get through! If only a few do then we can do low-demand applications like email and text messages. If a lot get through we can do voice and even video! It’s that simple and that very simplicity may be why it is so hard to understand. But it is also that simplicity that has made the Internet so resilient.
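To make this concrete, here is a minimal sketch in Python of endpoints compensating for loss themselves. It is illustrative only – the peer address (a documentation address) and the retry counts are assumptions, not part of any real deployment.

```python
import socket

def send_with_retry(payload: bytes, peer=("192.0.2.10", 9999),
                    retries=5, timeout=1.0) -> bool:
    """Best-efforts delivery: the endpoints, not the network, handle loss.

    Send a datagram and wait briefly for an acknowledgment; if none
    arrives, simply resend. The network in between promises nothing.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    for _ in range(retries):
        sock.sendto(payload, peer)        # fire a packet into the ether
        try:
            ack, _ = sock.recvfrom(64)    # did the other side hear us?
            if ack == b"ACK":
                return True
        except socket.timeout:
            continue                      # lost? just try again
    return False                          # give up; the application decides what next
```

There is no operator in this exchange – the software at the edges does all the compensating.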
In this context IP (the Internet Protocol) is simply a common language we use for packets, though it isn’t the only option. And TCP (Transmission Control Protocol) is nothing more than a convention for exchanging messages over a distance. What is amazing is that TCP is a way we cooperate without any central controllers. If the packets back up it slows down the messaging so we can share what is available.
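This cooperative behavior can be sketched in a few lines. The model below is a simplification of the additive-increase/multiplicative-decrease rule at the heart of TCP’s congestion control, not TCP’s actual implementation; the numeric constants are illustrative.

```python
def adjust_rate(current_rate: float, saw_loss: bool,
                increase: float = 1.0, decrease: float = 0.5) -> float:
    """Additive increase, multiplicative decrease (AIMD).

    There is no central controller: every sender follows the same
    local rule, and the shared capacity ends up being shared."""
    if saw_loss:
        return current_rate * decrease   # packets backed up: slow down sharply
    return current_rate + increase       # all clear: gently probe for more capacity
```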
Unlike the Tragedy of the Commons we have an abundant commons because the benefits of cooperating are shared and immediate! This is related to the network effect in which the more people participate the more value accrues to all parties. To the extent that we use shared facilities we contribute to the commons more than we take from it.
One recurring theme is that we create solutions using software but software is really just our knowledge coded in a way that makes it easy to share. If someone discovers a shortcut that saves an hour of walking then we can all benefit by sharing the knowledge.
Deciding how much knowledge we, as a society, keep for our own benefit and how much we share is always a balancing act. The success of the web is a dramatic testament to how we can all benefit from sharing. Tim Berners-Lee lives in a far better world than he would have if he’d tried to keep others from profiting from the web.
While it’s useful to understand the power of this concept, it is sufficient, for the purposes of this paper, to recognize that if the value is created outside of the common facilities then we can’t fund those facilities by having an owner who must make a profit.
As TCP demonstrates, we can treat the facilities we use as a public good and share the commons without having to limit speech merely so a provider can make a profit – especially when the provider is no longer providing the services. A phone conversation over IP doesn’t even exist as such within the network!
Why this Matters
To use one example from the talk – connected healthcare. It would be very simple for someone to carry a glucose monitor that calls for help in an emergency. The problem is that today’s policies require that every intermediary make a profit from passing on the messages and assure that each relationship is authorized. In practice this doesn’t work – one can make it work in special cases but those are the exceptions.
Once we can assume connectivity the benefits of just that one application could justify the costs of the communications infrastructure.
To put this in perspective – we spend trillions of dollars on highways. By comparison the cost of radios and fibers is nil, especially when they are installed while building new roads and we already have open trenches. It should be very easy to justify paying for the common infrastructure out of what amounts to petty cash.
But the political will is not there. Fortunately we can effect change by cooperating at the edge just like we did with the Internet by starting with local networks and then interconnecting them.
An Appropriate Marketplace
The FCC faces a challenge in applying policies appropriate for intelligent networks to the new reality in which the intelligence is entirely outside of networks.
It makes no more sense to make each wire a profit center than it does to make each square of sidewalk pavement profitable in isolation. The idea of competing broadband wires is also problematic: if all we need is best-efforts transport of packets then it makes even less sense to have multiple facilities than it does to have multiple electric grids.
This may not be obvious because today’s providers make money selling their own services, most notably as “cable” TV providers. The business of cable TV is very much in the mold of traditional telecommunications, in which each provider has its own infrastructure. But OTT (Over-The-Top) services demonstrate that this is no longer necessary: the content can be delivered over the Internet. In fact, Verizon, for one, delivers its Video on Demand over IP today.
Netflix, Hulu and others do very well without their own infrastructure. That infrastructure is owned by companies like Comcast, which view them as not just competitors but an existential threat. This is very different from the situation we see with Samsung and Apple, where one part of Samsung competes with Apple and another supplies it with chips. In this case, though, Comcast can act as a gatekeeper and leave Netflix without effective alternatives.
The problem is not simply that today’s providers are competing with their customers. While we can address some of that by separating the ownership of infrastructure from the ownership of content (structural separation) we need to go further and fund the infrastructure as a public good rather than as a series of wholesale pipes.
Fortunately we don’t have to design an entirely new approach but instead can learn from what is already happening in the marketplace.
In the 1990’s AOL divested itself of its network infrastructure so it could focus on making money using others’ networks. More recently Time Warner spun off Time Warner Cable for the same reason. Comcast has likewise shifted its focus from owning the cable to becoming a content vendor by buying NBCUniversal.
Learning from Castle Village
Low Cost “Internet”
Castle Village is a co-op on Cabrini Boulevard in Manhattan (New York City). There are 600 apartments in five towers. The governing board decided that it could save money and provide Wi-Fi in the common areas by making Internet connectivity part of the package, just like halls and other common facilities.
Each apartment pays $10 a month, which covers the amortized cost of equipment, maintenance and the cost of connecting to the rest of the Internet.
Each apartment gets a 100Mbps (megabits per second) connection, with the entire building sharing 200Mbps on a fiber that is provisioned for up to 1Gbps (a gigabit, or 1000 megabits, per second).
Panix provides the Internet connection and also supports the buildings’ connectivity. Perry Metzger (a Castle Village resident) is an experienced Internet expert and helps manage the facilities as well as the relationship with Panix.
While the original goal was simply “cheap Internet”, once the tenants in the building can assume connectivity they discover new uses. For example, when a package arrives the doorman sends a picture of the package to the tenant. This hints at larger possibilities that we’ll explore below.
Drilling Down
The ratio of 100Mbps/apartment to 200Mbps for 600 apartments may seem extreme but it hints at some of the ways we have to think differently to understand how the Internet works and how to understand the costs.
Let’s start with that 100Mbps per apartment. It is far higher than today’s typical Internet connection yet it is artificially low because the routers in the building can be set to provide a gigabit per apartment. If each apartment indeed had a gigabit it would make essentially no difference!
This is why it’s easy for carriers to claim to offer gigabit service. Today’s users use just a few megabits, but by advertising a meaningless upper bound on capacity the carriers divert our attention from the fact that we are paying a monthly fee when the ongoing cost is very low. If the community borrowed money to pay for its fiber it would be able to pay off that loan in the first year and would then have abundant capacity within the community, just as Castle Village has within its buildings.
If every apartment watched Netflix at the same time that would mean 2Mbps per apartment, or 1.2Gbps in all. This doesn’t happen because the families in the building still have cable subscriptions. But if they did all switch to Netflix and its ilk, the additional capacity would be a relatively small incremental cost because that $10/month already includes operational costs and the fiber that connects the buildings already has gigabit capacity. And any increase would be small compared with the savings the users would get by not paying the cable company for “Internet”.
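It’s worth making the arithmetic explicit. The figures below are the Castle Village numbers cited above; the script just restates them.

```python
apartments = 600
monthly_fee = 10            # dollars per apartment per month
shared_uplink_bps = 200e6   # shared capacity provisioned today
netflix_stream_bps = 2e6    # per-stream demand (2014-era figure)

monthly_budget = apartments * monthly_fee            # $6,000/month covers everything
worst_case_bps = apartments * netflix_stream_bps     # 1.2 Gbps if all stream at once

print(f"Community budget: ${monthly_budget:,}/month")
print(f"Worst case demand: {worst_case_bps / 1e9:.1f} Gbps "
      f"vs {shared_uplink_bps / 1e6:.0f} Mbps provisioned")
```

The gap between the worst case and today’s provisioning looks alarming only until we remember that the fiber already supports a gigabit and that caching (below) removes much of the demand.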
But there might not be any need for additional capacity because Netflix and others can cache the video content locally, as Akamai does. Given how inexpensive storage is, a few terabytes of capacity have a one-time cost of a few hundred dollars!
This only hints at how different the economics of connectivity are from traditional telecommunications. Rather than thinking in terms of the cost of the service we can think of one time costs for the disk space and fiber and then have the ability to act as owners using the facilities as we wish without an ongoing fee because we are creating the solutions ourselves using the intelligence in our devices.
Think of the difference between paying once for a copper wire or a radio and today’s model in which we pay a provider simply to use a facility. Why are we forced to pay a monthly fee when we can own the wire?
Expanding
Other buildings near Castle Village are interested in joining in and, in doing so, would increase the area of connectivity, further improving the economics. As the footprint expands we get further economies of scale: the costs go down and the community expands.
Parents could watch their children in the playground without using any of the capacity of the pipe to the rest of the Internet. The larger the community the more applications can take advantage of local capacity without generating new usage costs.
That said, this emphasis on high capacity for video content is actually a side issue since many valuable applications, such as medical monitoring, don’t require much capacity. The only reason I’m emphasizing video is to show that even in a worst-case scenario this approach works very well.
Castle Village is just one example. It is very easy to replicate in other buildings and that is happening. We will see connectivity growing as neighboring facilities start to interconnect directly with each other without a provider in the path collecting a rent, and the balance will shift to the point that connected communities are the norm.
As the footprint expands there is no limit to how large the area of common ownership can be. If we expand to the scale of a city the city services can take advantage of this common, and abundant, infrastructure.
But I’m getting ahead of myself.
The Residential Gateway Circa 1995
To better understand what is happening let’s go back to 1995, when the Residential Gateway was to be the future. At that time broadband was seen as a fat pipe that terminated in a residential gateway at the customer’s premises (to use telco lingo). Sprint ION was one example. The pipe would be owned by the provider and would be used to provide services such as telephony and video content (cable) as well as services such as meter reading and home monitoring. It was bidirectional so it could also support electronic commerce. You may recognize some of this from current advertisements from the phone companies as they continue to seek new services they can sell.
The ideas had developed over the previous decades, with information services such as Minitel in France acting as examples. One of the premier services was to be interactive TV, which would allow the viewer to interact with TV programs by answering questions and making purchases.
Telephone companies developed a technology called ADSL, or Asymmetric Digital Subscriber Line, in the 1980’s to deliver video at a few megabits per second, and cable companies added an up-channel to their systems.
With the growth of the web the modems that were already part of the standard personal computer package became more important. Thanks to the accidents of history, local phone calls were unmetered for most users in the United States, making it very inexpensive to dial up and stay online.
(US users expected detailed billing for phone calls, so it was cheaper to charge a flat monthly rate for local calls. European PTTs weren’t required to do detailed billing and so kept metering usage. Charging by the minute discouraged European users from staying online, thus slowing the adoption of online services compared with the United States.)
The original phone company high speed Internet offering was meant to be just like the dialup service. You would use your local DSL connection to connect to an Internet provider just as you’d connect to a long distance provider. In fact the protocol, PPPoE (PPP over Ethernet), was modeled on dialup and you’d get your IP address assigned on each connection. Even today many FiOS users get a new address every time they restart their connection. And, like dialup, you’d be charged for each line you use.
In the cable TV world the model was the set-top box, where we pay a monthly fee for each television.
Of course the gear would be owned by the provider just like it was in traditional telephony with a monthly fee for each service.
Connectivity Starts at Home
Shared Internet
In 1994 I was working at Microsoft and telecommuting. Because I lived in Boston, Massachusetts, and my office was in Redmond, Washington, I would usually work from home. Of course I had a LAN at home using the Ethernet technology I had learned about in May 1973 when Bob Metcalfe spoke about what he called Ethernet – a very simple but powerful idea based on a radio packet network contained within a cable.
In order to share a single dialup connection among all my computers I used a technology called Network Address Translation (NAT) to connect my entire network to the rest of the Internet. A NAT makes the entire local network look like a single computer. I later used an ISDN connection to connect myself to the network in Redmond. Staying connected is a very different experience than dialing in to the Internet for a particular purpose.
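Conceptually a NAT is just a bookkeeping table that rewrites the source of each outgoing packet so the whole home network appears as one address. The sketch below models only that bookkeeping; real NATs do this per-packet in the forwarding path, and the addresses shown are documentation addresses used for illustration.

```python
class Nat:
    """Minimal model of Network Address Translation bookkeeping."""

    def __init__(self, public_ip: str):
        self.public_ip = public_ip
        self.table = {}          # (inside_ip, inside_port) -> public port
        self.next_port = 40000

    def outbound(self, inside_ip: str, inside_port: int):
        """Rewrite an outgoing packet's source so the provider sees one host."""
        key = (inside_ip, inside_port)
        if key not in self.table:
            self.table[key] = self.next_port
            self.next_port += 1
        return self.public_ip, self.table[key]

    def inbound(self, public_port: int):
        """Route a reply back to the inside machine that initiated the exchange."""
        for (ip, port), mapped in self.table.items():
            if mapped == public_port:
                return ip, port
        return None              # unsolicited traffic has nowhere to go

nat = Nat("203.0.113.7")
print(nat.outbound("192.168.1.20", 5000))   # ('203.0.113.7', 40000)
```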
I was able to do this because I had control over the network in my house and could just do it myself. All I needed was the expertise to install and operate my own network. The secret is that I knew how simple an Ethernet is and I had friends who could help me understand how to connect my systems to the rest of the Internet.
When I had my own company in the 1980’s I installed an Ethernet and other companies began to install their own local networks.
Naturally I installed a LAN to interconnect the computers within my home. But even as late as 1995 it was unusual to have multiple computers at home. That seems quaint now, 20 years later.
Like others I had to use a dialup connection, but rather than connecting a single computer I wanted to interconnect my home network with the rest of the Internet.
Because I was at Microsoft I was in a position to take what I’d done for myself and make it available to all by making it possible for people to operate their own home networks. Again, in 1995 operating a network seemed too complicated for most people, but because of my experience I knew it could be simple. I also realized that when people bought a new computer they would hand the old machine down so their children could use it for homework or, at the time, to browse that new World Wide Web.
How To
The basic approach is very simple. By placing a NAT between the home network and the provider’s facilities the user pays for only a single connection rather than one per machine.
In 1995 Windows didn't come with IP built in so I had to make sure the right drivers were included automatically with every system and that they automatically configured themselves when interconnected.
This was before Wi-Fi so I also made sure there was a simple option to interconnect the machines using the existing phone wires – at the same time they could be used for dialing up or, later, DSL. The availability of Wi-Fi has made things much simpler.
Ownership
There is a big difference between networking as a provided service and owning one’s own network. For those who had the skills to do their own Ethernet, a high speed connection was 10Mbps, which seemed fast in its day. And networking gear would typically cost a few hundred dollars.
Once you’ve installed the gear the cost of operating the home network is $0/month – there are essentially no ongoing costs. Because people own their own wires and buy their own gear there is a very competitive market, and we saw Moore’s law work within the home, with speeds going from 10 to 1000Mbps for wired connections and up to a few hundred megabits for wireless, and with the cost of routers dropping to $10 in some cases!
This contrasts sharply with traditional telecommunications in which networking is a service and costs have gone down very slowly and, in many cases, have actually increased!
This is a strong reason for us to own our infrastructure, either individually or collectively.
Not only have the costs come down but we now have IP printers and IP connected televisions and even light bulbs. That could not have happened if we had to pay a monthly fee for each device and connection!
Today’s gigabit networks only hint at future possibilities. Other cables – SATA, USB, and HDMI – go much faster. These are all really just digital networks, but because we don’t fully appreciate what I’m calling borderless connectivity we aren’t taking full advantage of the possibilities. But more on that below.
Broadband!
Carriers started rolling out their broadband services as I was doing home networking. The high speed connections (and even 1Mbps was considered high speed) seemed like too much for a single PC, so when the providers approached Microsoft they were sent to me to connect broadband to the home network.
Sometimes the timing is just right even if the providers didn’t fully understand the concept. In fact their terms of service prohibited sharing the connection! The packets themselves are totally decoupled from their meaning. Yet the carriers’ policies hark back to the days when they carried voice and their intelligent network “understood” the meaning.
The idea that you can’t enforce social policy in the network is still one that is difficult for policymakers to grasp. But such understanding is necessary in order to move forward. Copyrights should be respected but trying to enforce copyrights by inspecting packets would be like trying to reduce crime by requiring street lights only be used for legal purposes and having people sign an agreement saying so. (Would that even pass Fifth Amendment muster?). We need to address copyright concerns without doing harm. We seem to have forgotten the protections of the First Amendment and common carriage as we try to limit ourselves to only approved speech.
AT&T bet billions of dollars on its ability to benefit from the value of the content by paying top dollar for MediaOne. It expected to get a fee from each e-commerce transaction. On the west coast it invested billions of dollars in Excite@Home and tried to ban webcams as abusing its service. This seems like a distant echo of the Hush-a-Phone case in the 1950’s and the Carterfone case in the 1960’s.
(It’s remarkable to think that there was a time when people didn’t own the wires and devices in their own homes yet today communities still don’t own their own facilities!)
AT&T paid dearly for its misunderstanding, declining to the point that SBC bought the company and took its name.
Yet we see the same kinds of policies in the cellular world as the carriers require an account for each device and limit how we use the packets, as in banning tethering and dictating how we use our mobile computing devices, AKA smart phones.
Two Worlds
As we’ve seen, a home network is just wires and/or radios and there is no significant ongoing cost to using it. In order to reach the rest of the world we buy a connection from a provider. This is called “broadband” because that was the name for an older technology.
What is important is that looking outward from the home we can think of today’s telecommunications as a resource we can leverage by paying a monthly fee rather than having to run our own wires.
So the cost is $0/month for our own network and a monthly fee for a pipe to the rest of the world.
DIO Infrastructure for Connectivity
Working with Friends
As long as I simply want to connect devices within my home I can do it myself (with, perhaps, help from my family) by installing routers and access points.
But if I want to connect devices that are further apart I need some help from my friends and neighbors and that requires getting them to join with me. That might not be easy which is why we need examples from which to learn.
This is no different from our experience with other infrastructures. Road numbers didn’t “just happen”. At first private companies would publish books of directions with pictures showing the landmarks drivers could use to navigate. Then companies like Rand McNally had the bright idea of painting numbers on streets, and once the idea became obvious communities started to number streets at every scale, ranging from the most local streets up to national highways like US-1.
US-1 wasn’t built as a highway. Instead the governments simply posted signs saying US-1 as a guide for drivers. In a sense US-1 was created as a software application using existing roads and highways to form a system. Eventually we did build an interstate highway system but only after we had general agreement on the value of such a system.
Providing connectivity in an apartment house is basically the same as providing connectivity within our homes but at a larger scale. But it does require getting your neighbors to cooperate and that may not be easy even with the appeal of “cheaper Internet” let alone the other benefits.
As policymakers we have the luxury of approaching this problem very differently. If we have to convince our neighbors and face 100:1 odds of succeeding then the effort is indeed hopeless. But if we have 100,000 buildings then the odds are very different and that ratio means there are one thousand buildings that can serve as seeds and examples for others.
As I explained above, Castle Village in Manhattan is such an example and, as neighboring buildings join in, it’s also a seed from which we can grow.
Connecting
The connectivity within the building is similar to what we have in any local network. We can use any available wires and radios for connectivity within the building. Like DIY within the home, we have Do It Ourselves or DIO connectivity within the building.
Just as with home networking, there isn’t a significant usage cost for that local connectivity and there is a relatively low maintenance cost. This is why we consider home networks as costing $0/month inside the home.
The cost of using the public infrastructure will go down as new generations of gear improve the resilience and self-monitoring of the local network. Packet routing is a software problem and subject to Moore’s law. Unlike telecommunications, which reserves capacity (a channel) for each conversation, packets allow us to share capacity among all users. If one wire is down packets will automatically take a different path while identifying the failure so it can be dealt with. (More detail in the Appendix on how today’s Internet works.)
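The rerouting is not magic – it falls out of treating the network as a graph and finding any surviving path. Here is a toy sketch; the topology is invented for illustration.

```python
from collections import deque

def find_path(links, src, dst):
    """Breadth-first search: any surviving sequence of links will do."""
    frontier, seen = deque([[src]]), {src}
    while frontier:
        path = frontier.popleft()
        if path[-1] == dst:
            return path
        for nxt in links.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None   # no path survives

links = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(find_path(links, "A", "D"))   # ['A', 'B', 'D']
links["B"].remove("D")              # a wire goes down...
print(find_path(links, "A", "D"))   # ['A', 'C', 'D'] -- traffic takes the other path
```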
To reach the rest of the world we can simply think of the entire telecom infrastructure as a resource we can lease. This is just what we do when we buy a “pipe” through a cable company’s broadband infrastructure (AKA, buy broadband).
In the case of Castle Village the fiber between the buildings and the point of connection is leased as just that – a fiber. It doesn’t matter whether it’s carrying 200Mbps or 1Gbps. It’s just a resource. Past the point of connection there is a different kind of pricing depending on the particular deals made with the carriers.
As we extend the boundary of local connectivity we change what we think about the wires and radios we use. In the realm of telecommunications those facilities are owned by an operator who has an ongoing fee for their usage. But within the domain of local ownership we have the shared facilities as a public good – the fiber, wires and radios are owned by the community with no ongoing usage charge.
This is a key point – the same physical materials are viewed differently depending on our point of view, which means we can simply repurpose existing infrastructure.
But it also means more than that: as a public good managed by the community, the infrastructure will start to see Moore’s law style improvements. In traditional telecommunications, innovation takes the form of finding creative ways to increase the revenue from subscribers who have few alternatives.
By competing for the city’s business on the basis of the most capacity for the least cost the companies will need to be innovative in their ability to take advantage of infrastructure and gear. With software we can monitor systems in order to detect and route around failures. This allows us to maintain connectivity while doing repairs and resolving problems.
In a way it is like managing a two lane highway rather than a single lane – a simple failure like a stalled car is an annoyance but doesn’t stop all traffic. All available wires, fibers, and radios are available. And without the need to protect a border in order to assure that each bit is billable there is great flexibility – people can even fashion their own solutions by adding their own capacity.
Unlike today’s electric grid where adding a new path amounts to stealing electricity, with connectivity additional paths add to the capacity for all. Again we have the abundance of the commons.
Using Connectivity
The most obvious rationale for the common infrastructure is shared access to the web and so-called cloud services.
Connectivity within the building is basic common infrastructure. To use a simple example, if we wanted to install temperature sensors around the building all we need do is stick them on the walls and define a relationship. Defining that relationship might be as simple as typing the number of the device into the control panel of an HVAC system. No need to run wires or do anything else – the sensors can send their signals using any available connectivity.
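A hypothetical sketch of such a sensor’s software: it neither knows nor cares what wires or radios carry its readings; it just addresses the controller and sends. The controller address, port, and message format here are assumptions for illustration.

```python
import json
import socket
import time

HVAC_CONTROLLER = ("192.0.2.50", 5683)   # assumed address of the building controller

def report_temperature(device_id: str, celsius: float) -> None:
    """Send a reading over whatever connectivity happens to be available.

    Best-efforts: if one reading is lost, the next arrives a minute later."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    message = json.dumps({"id": device_id, "temp_c": celsius, "ts": time.time()})
    sock.sendto(message.encode(), HVAC_CONTROLLER)

report_temperature("lobby-07", 21.5)
```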
In practice it’s not quite that simple because today’s networking technology is not quite there. For example, while the Internet architecture is based on direct relationships between end points, the current implementation of Wi-Fi requires setting up a relationship with the intermediate access points, such as a security key. Devices have to be on the same network in order to communicate because the NATs I placed in homes were treated as network boundaries rather than simply temporary shims.
If we’re to understand the benefits of connectivity we must distinguish these near term considerations from the larger concepts. This is why it is so very important that we don’t confuse the current implementation of the Internet with the powerful idea that it represents. In the end it’s about software and software can evolve as our understanding evolves.
The key is to work with what we have now with an eye towards what we can have in the future. These principles must guide our long-term policies.
Companies like Echelon are designing their building systems to take advantage of this new infrastructure. We’re also seeing a first generation of IP-connected light bulbs and other devices. More proprietary protocols such as Z-Wave, Bluetooth, Insteon and Zigbee are also being bridged to use the common connectivity.
Connectivity doesn’t stop at the boundaries of the building. We can process the sensor data at the furnace in a building or choose to use remote services or both as when a furnace is controlled by local sensors and managed remotely.
Once we’ve paid for the “pipe” through telecom we can just assume global connectivity as a resource. It doesn’t matter how each device is connected – whether on a corporate network in a management company or via a cellular connection.
This is not entirely true because today’s policies and implementation introduce annoyances everywhere. If you are using a portable computing device supplied by a wireless carrier then you might be subject to the rules set by a carrier whose goal is to maximize that carrier’s revenues. It’s just business but the net effect is to make a simple application like using a sensor seem very complicated.
After all, it’s pretty easy to send a message (such as a pulse measurement) from point A (a wrist monitor) to point B (a hospital’s computer). The difficulty is getting past each and every one of the gatekeepers and rent seekers who own the facilities between those points.
Public policy should assure access to our common facilities as a common good with a sustainable funding model that isn’t tied to any particular application. This gives us all the opportunity to create economic value as well as improve our quality of life.
Leaving the Village
The residents in Castle Village, working with their board, can work together to fund common facilities across five buildings. As we’ve seen we can achieve radical economies of scale for connectivity both within the building and without.
This is a model that can easily be replicated in other buildings. The model can also be extended. Nearby buildings can share a pipe and a common support staff to achieve further economies. They can also use local caching services to better serve the local community without putting a high demand on the pipe to the rest of telecom.
Buildings don’t have to be adjacent in order to gain bargaining power – they merely have to work together. “Dark Fiber” is another interesting option. Remember that Castle Village is using 200Mbps on a fiber currently capable of a gigabit per second. By sharing the ownership of a fiber, buildings can use the full capacity between them. I said “currently capable” because new gear can typically get more capacity out of existing facilities just as copper wires went from a few kilobits per second to megabits for DSL when the carriers merely changed the way they operated the existing copper.
Having a myriad of options may seem complicated and confusing but it all becomes simple when we normalize it to simply exchange packets of bits.
The primary source of complexity is the business model of telecommunications which is based on the presumed purpose and value of each bit. Thus a phone call would be charged differently from a data connection. The same copper wire would be charged at a different rate depending on whether it was used as an alarm wire or as a phone or DSL wire. The fact that a $1/month alarm wire could carry megabits while a $10/month phone wire could only carry kilobits shows some of the problems in mapping the old pricing into the new reality.
The 1984 divestiture of AT&T was one result of what I call reality arbitrage, which allowed third parties to use commodity facilities to compete with the carriers’ high value services.
The Internet takes this one step further. Whereas companies like MCI provided reliable channels in the same way that AT&T did but at lower cost, with the Internet we don’t reserve channels at all. Instead the applications take advantage of the available capacity as a resource.
By adding generic capacity we increase the number of applications that just work without having each one vying to pay for its own facilities. This approach allows community investment to benefit all rather than giving a provider the incentive to increase revenues by pricing limited capacity at a premium.
Compare with Today’s Policies
The Internet is Fundamental
The idea of sending data over the phone lines goes back to the earliest days of teletypes. Actually, we were already sending data over telegraph lines long before modems leveraged the growing voice infrastructure by modulating digital data onto an analog signal and converting it back to digital at the far end. We also had native data services such as X.25 and frame relay.
But the Internet is fundamentally different. It doesn't rely on the network maintaining a circuit or providing reliable delivery. With radio packet networks like ALOHAnet there was no operator – just radios. And with Ethernet, just a coaxial cable. It was the intelligent devices – computers – that used those raw materials to exchange packets. Applications like file transfer existed only in those computers, not in the network. IP is the same idea but over longer distances, using telecom as just another raw medium without depending on reliable circuits.
In about 1970 I took a class in which we studied various systems including AT&T’s ESS-1 computer – their first electronic switch. Later I read the specs for SS7, the protocols for the intelligent network. It is a packet network just like the Internet but with different constraints – namely, as the name implies, intelligence is entirely within the network, guaranteeing reliable circuits.
Both the Internet and the Intelligent Network are different ways of using the same physical wires (and radios) as a transport. The Internet has proven to be a far more flexible approach (for reasons I’ll go into later). “Internet” isn’t just another service – it’s a fundamental way of implementing services.
Competing Broadbands
Broadband and the electric grid typically use the same poles (and conduits). They seem very similar in that they both use wires (or fibers) and deliver a product to their subscribers.
One big difference is that the broadband (and telephony) infrastructures were owned by an operator to deliver services such as television content and phone calls. In the 1990’s “Internet” was added to the mix, but it wasn’t entirely new: the infrastructure was already being used for connectivity via modems that modulated (hence the term modulator/demodulator) the data signal so that it looked like a phone call. And, indeed, the initial service was modeled on this dialup connection.
The Internet is different in being a fundamental way to implement services, as an alternative to SS7 (or, drilling down, ATM). And, increasingly, the cable services themselves are being implemented using IP as the basic transport. There is “air” between IP as a transport and the services that use it.
The big difference is that SS7 and ATM are designed for use within an intelligent network. IP is intended for connecting intelligent devices outside the network without depending on circuits and reliable delivery.
If we are to realize the benefit of connectivity we must make IP connectivity available as fundamental infrastructure.
In a sense this is like making electricity available as a basic technology. In the early days of electricity we had municipal lighting companies and some of us remember the days when you could get free light bulbs courtesy of the light company. As more electric appliances became available it was obvious that the product was electricity and not light.
In the same way now that we can implement all services over IP (or, “OTT – Over the Top”) we need a business model based on transporting raw IP packets. This will lead to a rapid shift in the industry because it makes no sense for one company to bear the burden of a private infrastructure when there is a public facility available at no additional cost to the content provider.
As with electricity we have a natural monopoly because there is no differentiation. In fact attempting to maintain separate facilities creates expensive redundancy without resilience.
Intelligence Outside
The electricity metaphor breaks down at this point because, unlike electricity, which is an expensive consumable, IP packets are just software – a way we use the facilities to have a conversation. There is nothing consumed – beyond a minuscule cost for maintenance that is more than made up for by the money saved by having a common infrastructure.
The value, as in conversations, is created in the intelligent devices outside the network. It doesn't make sense to bill for packets of bits because they are just integers – the valuable meaning is interpreted outside the network. And IP packets work better without the restrictions of the traditional circuits and reliable delivery offered by telecommunications providers. With best-efforts we don’t even require that every packet get through. Instead applications adapt to what works and, as Skype and others have shown, it’s remarkable what we can do with facilities well below the standards of telecommunications.
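That adaptation can be sketched simply: measure what the path actually delivers and pick a quality to match. The tiers and thresholds below are invented for illustration, not Skype’s actual algorithm.

```python
def pick_quality(measured_kbps: float, loss_rate: float) -> str:
    """Adapt to whatever the best-efforts path delivers right now."""
    usable = measured_kbps * (1 - loss_rate)   # discount for lost packets
    if usable > 1500:
        return "HD video"
    if usable > 300:
        return "video"
    if usable > 40:
        return "voice"
    return "text/email"                        # a trickle of packets still suffices

print(pick_quality(500, 0.05))   # 'video'
```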
Without a consumable to sell or the ability to add value with circuits and reliable delivery, what is a carrier selling? If we go back to the comparison with the road system, the value comes from the system as a whole, not any inch of street in isolation.
Facilitating
All of this leads to a different approach. The value of the connectivity benefits the community as a whole – you don’t associate a given brick with a given step by a given user. Instead the community works together to fund common facilities.
This is, of course, Do-It-Ourselves at the scale of cities rather than individual buildings. Eventually, when the idea becomes widely accepted, we will think of it as a governmental function. This is what governments are best at – providing common municipal services like sewers and water. Providing connectivity is far simpler because of the resilience of the protocols and the ability to use software to create solutions.
Rather than the faux competition of competing broadband infrastructures we would have real competition by companies providing gear and support to the cities. Since we have a public good without exclusion anyone can extend the reach of connectivity.
The city’s role is simple – facilitating the exchange of packets. Where I live sidewalks are optional – they do make it easier to walk but people can also walk on grass next to the road or in the road itself.
This is very different from today’s municipal broadband, which is funded by trying to make the common infrastructure a profit center. That approach puts the city in the awkward position of blocking the use of the common facilities. It creates artificial scarcity as well as making failure the default.
The irony is that it’s hard to prevent bits from flowing! The owner of the facilities has to put in extra effort to second-guess the meaning of the traffic.
Competing for-profit connectivity provision isn’t a sustainable business over the long term without cross-subsidy from a content business. It’s a commodity with no differentiation and no consumable to sell. The value is now created in the intelligent devices that are outside the network. Without the subsidies from the content business the only way to charge for carrying the traffic is to assure that unbilled packets don’t pass. In a sense we are preventing people from communicating merely so we can create a billable event.
Having two broadband facilities, each of which can serve the entire city, doubles the cost; without synergy we get redundancy without resilience. We understand how to use and fund best-efforts connectivity as a public good.
More Benefit
As we’ve seen within a single building, the availability of a common infrastructure is a resource for all purposes. We don’t need a separate system for public safety, for parks, for traffic etc. Not only do they all benefit from having a common infrastructure, it also becomes easy to implement new capabilities.
Once we can assume connectivity we can start to develop a next generation of solutions. For example a smoke detector can do more than just beep – it can send a chemical analysis and an exact location directly to the fire department. By comparison dialing E-911 harks back to the days when all phone calls depended upon speaking to switchboard operators. And how do you dial E-911 when you’ve had a heart attack?
Case Study: Connected Healthcare
A case in point is connected healthcare. The idea of wearing a pendant to summon help has been around since the 1970’s. The original idea was fairly simple and suitable for the technology of the day. The pendant had a radio, and when you pressed the button it would send a signal to a dialer connected to the phone circuit and dial for help. In the 1970’s simply hooking up such a vital piece of equipment would’ve required the permission of the phone company. People were not allowed to create their own solutions!
Today we can imagine such a pendant working anywhere enabling people to be mobile and also using it to carry on a conversation with the support people. But let’s look at the simple case of just summoning help.
We need to get the message from the pendant to a response center. An obvious first step is to generalize the link to the phone dialer – today that is often done using Bluetooth. Bluetooth requires pairing, so the pendant must be paired with the appropriate phone dialer. One advantage of Bluetooth is that we can generalize this to use a cell phone as a transit point so the user can be mobile.
This works as long as the pairing is just right – not only must the pairing be maintained but the cell phone must have an active account with a provider that has coverage where the person is.
Alternatively we can use Wi-Fi but there too we have an authentication problem given that so many access points are locked down be it with a security key or simply an “agree” screen.
There are so many ways for connectivity to fail.
But with an infrastructure approach without borders we have a much simpler problem. If any path works we can get the message through and summon help. It doesn’t have to be any particular technology as long as we find one that works. If a subbasement lacks coverage then the basement owner can extend connectivity to reach the basement.
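A sketch of the borderless approach: the pendant tries every transport within reach and succeeds if any one path works. The transport list and the callables it contains are hypothetical placeholders.

```python
def summon_help(message: bytes, transports) -> bool:
    """Try every available path; delivery over any one of them is success.

    `transports` is a list of callables that each attempt delivery and
    return True on success -- Wi-Fi, a Bluetooth relay, a neighbor's
    access point. With borderless connectivity the list is long and no
    single pairing or account is a point of failure.
    """
    for send in transports:
        try:
            if send(message):
                return True
        except OSError:
            continue          # that path failed; keep trying the others
    return False              # no path worked; fall back to a local alarm
```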
While we can get the monitors to work using today’s technology, such coverage requires negotiating with all the parties involved; then, to use another device, we need another special arrangement. And even then we have a fragile solution.
Once we treat connectivity as common infrastructure we can easily deploy many such applications without having to make any special arrangements.
Taking Advantage of Opportunity
Smart People rather than Smarter Cities
Connectivity is only part of the story. Once we have a better understanding of the value of this commons we can enhance it by making rich information available. As transit systems have provided open interfaces we’ve seen the rise of applications people create to help each other.
Rather than making the cities smart, we empower people to contribute to the system. Too bad Jane Jacobs isn’t around to see what has become possible for us to do by cooperating in the creation of our new infrastructure. She wrote about the importance of keeping cities vibrant in response to the sterile city planning of the mid-1900s.
Imagine if we had location information available as a basic resource rather than just today’s GPS which only works when we can pick up a satellite signal.
What would it take for the medical monitor to report its location, correlate it with the location of the bus someone is on, and assure that help arrives where the person in trouble is – or that that person has transportation available as another option?
The pieces are in place so individuals can now create such applications without having to build new infrastructure. We’re already seeing this happen as people start to annotate the world around them, as with Google’s Waze service.
A Powerful Idea; Borderless Connectivity
Today’s Implementations
When I’m careful I try to avoid describing the Internet as a network because it is networking more in the sense of a social activity than the traditional telecommunications idea of networking as a service. It is like treating travel as something we do rather than assuming that railroads are the only means of transportation.
For that matter, today’s Internet is a particular implementation of the larger idea of what I’m calling connectivity. When people talk about the Internet they commingle the social and technical issues. Technologists are often focused on the details of today’s protocols and how they fit into today’s telecommunications policies.
The very idea that the Internet is a network of networks assumes there are distinct physical networks which are interconnected. Even when we look at connectivity within a single building we use IP addresses issued by a provider. This makes local connectivity dependent upon a distant provider even for something as simple as turning on the lights.
The NATs, typically called routers, isolate the local portion of the Internet as if it were a separate network. If you connect to the wrong local network you can’t print on your home printer.
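A small illustration of why those borders matter, using Python’s standard ipaddress module: RFC 1918 “private” addresses are meaningful only within one NAT’d network, which is why the same printer address can refer to different machines on different networks.

```python
# Why "connect to the wrong local network and you can't print": RFC 1918
# private addresses mean something only inside one NAT'd network, so the
# same address can name different machines on different networks.
import ipaddress

for addr in ("192.168.1.50", "10.0.0.7", "93.184.216.34"):
    ip = ipaddress.ip_address(addr)
    scope = "local only (behind some NAT)" if ip.is_private else "globally routable"
    print(f"{addr}: {scope}")
```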
When we talk about Internet security and privacy we are talking about how we use connectivity rather than the connectivity itself.
That’s fine in the short term. We start with what we have. The basic idea of best-efforts connectivity is power – not only can we use today’s telecommunications infrastructure as a resource, we can use today’s Internet as the building blocks for the future and evolve new approaches as we work together.
Getting to Borderless Connectivity
Using Existing Infrastructure
Given the abundance of physical resources and existing sources of connectivity we can focus on the intelligent devices and what we can do with available connectivity.
The goal is to allow us to think in terms of the application. If we have to worry about the network – metered connections, wireless links or whatever – then we lose the ability to focus on the application; all these other concerns get in the way and we find ourselves stuck. And after all that effort we might not even be able to know whether a link three hops away is metered.
That cannot possibly work because that thinking is solidly framed in the world of telecommunications as a service, going back to the days when TPC (The Phone Company) owned the device. To use a simple example: if I use tethering (making a smartphone a hotspot) to offer Wi-Fi connectivity, how can a device using that connectivity know there is a metered segment somewhere later in the path?
The remote support company LogMeIn has its own virtual overlay that allows us to think only in terms of the LogMeIn addresses rather than the actual network. This is another example of how we put air between what we do with connectivity and the physical implementation. LogMeIn’s Xively division provides tools to connect things (sensors and devices) using the Internet’s connectivity.
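As a sketch of the general technique (not LogMeIn’s actual design or API), a rendezvous directory can give devices stable overlay names while their physical addresses change underneath:

```python
# Toy rendezvous directory: stable overlay names decoupled from whatever
# physical address a device currently has. An illustration of the general
# technique only; it is not LogMeIn's actual design or API.

class Overlay:
    def __init__(self) -> None:
        self._directory: dict[str, str] = {}  # overlay name -> current address

    def register(self, name: str, address: str) -> None:
        # A device re-registers whenever its physical address changes
        # (a new Wi-Fi network, a new cellular address, a new NAT relay).
        self._directory[name] = address

    def resolve(self, name: str) -> str:
        # Applications address the stable name; the lookup happens late.
        return self._directory[name]

overlay = Overlay()
overlay.register("pendant-42", "203.0.113.7:5000")   # at home
print(overlay.resolve("pendant-42"))
overlay.register("pendant-42", "198.51.100.9:6001")  # now on a bus
print(overlay.resolve("pendant-42"))
```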
The primary reason we tolerate providers owning the facilities in the middle is that we have very low expectations. We don’t expect a medical monitor to remain connected once we’ve left the hospital and we accept that a smart watch must be dependent upon one particular smart phone for its functioning.
We are used to being connected, but only after we’ve set up a general purpose pipe, as when we connect our home network or a building’s facilities through a broadband connection. And we need manual intervention (and an account) for each new place we visit, including friends’ houses.
Starting at the Device
Typically we consider the network border to be the point at which we need a billing relationship or otherwise have to authenticate ourselves. In the example of a tethering point removed from the device it isn’t useful to think that way. Instead we can just assume connectivity.
Once we do assume there is connectivity available we are then free to focus on the application, looking outward from the device.
This is the key to architectural thinking. When writing a program we think entirely about the application and factor out all those details of networking. All the technical details of routing fade away because they can be hidden by software and protocols, as in the example of LogMeIn’s virtual address space.
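For example, a program that fetches a page needs nothing more than a name and the socket API; every detail of which wires, radios, and routes carry the packets is factored out below this level. A minimal sketch:

```python
# The application names an endpoint and exchanges bytes; which wires,
# radios, and routes carry the packets is invisible at this level.
import socket

def fetch(host: str, port: int = 80) -> bytes:
    request = (f"GET / HTTP/1.1\r\nHost: {host}\r\n"
               "Connection: close\r\n\r\n").encode()
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(request)
        chunks = []
        while data := s.recv(4096):
            chunks.append(data)
    return b"".join(chunks)

print(fetch("example.com")[:120])
```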
What keeps us from fully taking advantage of this abstraction are the policy assumptions that require a Byzantine system of billing and business relationships just to maintain each element of the infrastructure as a profit center. This is the legacy of telecommunications regulation.
A Powerful Idea: Borderless Connectivity
A Fresh Start
Rather than trying to fix each of today’s policies one by one, we can take advantage of this architectural model to build on the idea of borderless connectivity apart from the details of how we fund and implement the existing infrastructure.
This is a powerful alternative to an “IP Transition” and reflects how we navigated such transitions in the past. Roads were not simply trackless railroads; they were a fundamentally different kind of infrastructure. Thus there was no attempt to modernize the ICC policies to deal with roads. Instead the ICC regulated trucking as a way to use roads for commerce, as we’d used railroads, while the roads themselves were managed by a department of transportation.
In the same way the FCC’s approach should be to wind down its role as we become more adept at using borderless connectivity as a basic resource. In the interim it should work towards providing IP connectivity without dictating exactly how the connectivity is used.
For example, once we assure resilient connectivity we can treat emergency service as an application outside the FCC’s purview and part of the larger effort to assure emergency services. We shouldn’t build a separate infrastructure for public safety; instead we need to understand how to use the huge resilient capacity of the public infrastructure.
We need to learn from the Interstate Defense Highway System, which has served us far better than a separate system just for military use would have. This is even more the case for connectivity because of the huge capacity available in an infrastructure where rapid improvement can be driven by the same market forces that have made gigabit Ethernet the inexpensive norm. Contrast that with the cost of building a public safety system out of a few megabits of scrounged capacity – a system far more likely to fail than a public system that can use any gear and any path.
We also can see the advantage of using connectivity rather than building a special system for E911 in the example of smoke detectors that report rich information or medical monitors that can directly summon help.
Policies
Network Neutrality
In the near term we should discourage efforts to block certain traffic while giving other traffic preference (for a fee). But this will be moot once we have a business in transporting raw packets apart from their applications.
Implicit non-neutrality is a more insidious problem. We see this in well-intentioned efforts to make the network smarter (an echo of intelligent-network thinking). One example is buffer bloat, in which network operators provide large buffers because such buffers improve traditional networks. But the buffers subvert the Internet protocols by presenting a network that seems to have more capacity than it does – until the buffers fill and everything grinds to a halt.
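A toy simulation makes the point. The numbers here are arbitrary; what matters is that a bigger buffer doesn’t add capacity to the bottleneck – it just lets a standing queue, and therefore latency, build up before the sender sees any loss signal:

```python
# Toy model of buffer bloat: a bottleneck drains 10 packets per tick while
# 12 arrive. A bigger buffer doesn't add capacity; it just lets a standing
# queue (and therefore delay) grow before any packets are dropped.

def standing_queue(buffer_limit: int, ticks: int = 1000,
                   arrive: int = 12, drain: int = 10):
    queue = 0
    for _ in range(ticks):
        queue = min(queue + arrive, buffer_limit)  # arrivals beyond the buffer are dropped
        queue = max(queue - drain, 0)              # the link drains what it can
    return queue, queue / drain                    # queue length, delay in ticks

for limit in (20, 200, 2000):
    q, delay = standing_queue(limit)
    print(f"buffer={limit:4d}  standing queue={q:4d} packets  queuing delay={delay:5.1f} ticks")
```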
Small edge networks, such as public Wi-Fi, often have limited capacity, so they attempt to ban unapproved applications such as streaming video in order to share what they have. These efforts are also well-intentioned, but we need to recognize that they are special cases and don’t represent generic IP connectivity.
The main source of non-neutrality, however, occurs in the basic broadband implementations in which a cable operator treats the Internet as a service alongside the capacity it uses for delivering video.
By shifting the business model to pure IP connectivity we get real neutrality thanks to indifference to the purpose of the packets.
This is borderless connectivity and must be the focus of policy.
Comcast/TWC
As we’ve seen, Time Warner decided it neither needed nor wanted its own infrastructure. Comcast recognized that the business of owning infrastructure was unattractive and thus bought NBC Universal.
The merger is important because, for TWC, the money is made using connectivity, and TWC finds itself a mere intermediary as others create value using its facilities. We need a business model for common infrastructure that does not depend on capturing the value created outside the network to pay for the facilities. If the merger is sought to give Comcast more control then that’s another reason to oppose it.
To the extent that Comcast argues it is in an unfair position as companies like Google build out competing infrastructure, it has a valid complaint. The remedy is not to compound the problem by allowing these companies to divvy up the market.
Instead we need to assure that Comcast, along with the rest of us, has equal access to a common infrastructure. And if Netflix can use that infrastructure as a delivery system without bearing an extra cost, then so can Comcast. That’s the result Comcast too should seek.
As a content producer, Comcast can benefit from a level playing field that gives it access to all potential customers. As a content broker it will be able to offer its product mix to everyone, though it also means facing competition everywhere. For customers it means real competition rather than a duopoly divvying up the local market.
Opportunity Zones
We generally talk about giving businesses tax breaks and other resources in order to get started. That’s fine for refining old ideas. But where do the new ideas come from? There is no magic – often the ideas are obvious. When we have a new enabling technology we see a flourishing of new ideas. The 1960s saw computing go from primitive tube computers to MIT’s Multics project and its offshoot UNIX.
Tim Berners-Lee was able to use the limited amount of connectivity available in 1990 to give us the World Wide Web. Imagine what would be possible if we could just assume connectivity everywhere in an area of a city.
We have the example of connected healthcare. If a housing project had ambient connectivity then medical devices would “just work”. Those who need assistance can be connected to their families and others who can provide help and champion the effort.
New HVAC systems could evolve as new companies experiment with how to manage the systems.
Of course we have large companies now designing complex systems, but they tend to be silos – only their own products work with each other. Once we can assume connectivity we can take advantage of the technology available to allow individuals and small groups to rapidly try out new ideas instead of waiting for multiyear product cycles. They would be forced to find synergy with others rather than going for a winner-take-all approach.
I can’t predict what is going to happen any more than I could’ve predicted the web. But I can say that creating opportunity will give us a future beyond just innovative variations on what we already have.
Structural Separation
This is a term that has been used to describe a wholesale/retail separation of transport services from retail services. It is a good idea in the sense that it reduces the conflict of interest inherent in having a content provider own the facilities that its competitors use.
In a sense this was the approach taken by ATT in 1984 for divestiture. It spun out the retail services to the Baby Bells and kept what was seen as the highly profitable long distance transport. It didn’t work out because, as we’ve seen, it didn’t address the fundamental changes evidenced by the Internet.
Separation is an intermediate stage and it should quickly become obvious that we need an infrastructure approach rather than trying to sell networking as a wholesale, for-profit, service.
Municipal Broadband
Cities must be able to provide their own infrastructure. But they must also do so responsibly.
Legislation that prohibits cities from looking out for their own self-interest is problematic and hard to justify, but we need to be careful in how we approach this. The efforts to ban municipal broadband are rooted in the tradition of telecommunications as a billable service.
We need to get ahead of the process and reframe such efforts as infrastructure rather than services. In that framing, banning municipal connectivity would be akin to telling a city it couldn’t own its own sidewalks.
We shouldn’t stop at assuring that cities are allowed to own their own infrastructure – they need to be encouraged to do so.
One interesting opportunity is to give cities control over the copper infrastructure the telecom carriers want to abandon. Given the advances in technology since ADSL was first developed in the 1980s we can start to tap into that potential at little cost. Simply by treating a 3000-pair cable as a common medium rather than 3000 individual lines we get a medium capable of 3 gigabits even at a conservative 1 Mbps per line, and it is likely that we can get 100 times that capacity by simply upgrading the line cards in central offices.
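The arithmetic, as a quick sketch (the 1 Mbps per pair is a deliberately conservative assumption):

```python
# The pooling arithmetic: 3000 pairs treated as one shared packet medium.
pairs = 3000
per_pair_mbps = 1  # a deliberately conservative per-pair rate (assumption)
pooled_gbps = pairs * per_pair_mbps / 1000
print(f"pooled capacity: {pooled_gbps:.0f} Gbps")                    # 3 Gbps
print(f"with 100x better line cards: {100 * pooled_gbps:.0f} Gbps")  # 300 Gbps
```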
It all depends on how you frame the question. If you ask about copper as a traditional telecommunications medium, where a single break in a single wire means an expensive repair and each wire needs to be carefully adjusted, then, indeed, copper is a very expensive medium. If instead we look at all the wires as a common pool and use smart electronics to adapt to the wires as they are, we can have abundant capacity. It isn’t at all about the copper itself – we can mix and match any technology because we normalize all the capacity to packets.
What the cities get in return is a common infrastructure that functions as a common good for all purposes.
This is a very different way of thinking of the common infrastructure which is why it is useful to have examples like Castle Village.
In most cities the carriers own the wires. One option is to build a new infrastructure, as some cities have done. The other is to buy or lease the facilities from the current carriers as a make/buy decision. Over time, just as Time Warner spun out TWC, I expect companies will ask to be relieved of the burden of maintaining the facilities, especially with policies that prevent them from gaining an advantage from owning their own wires. The risk for the companies is that once the city has a broadband infrastructure it won’t want to pay for a second one. In fact the broadband carriers may find themselves actually paying the cities to relieve them of the obligation to maintain the facilities.
One accident of history is that cities often do not own the poles used for power and communications wires. This legacy of the 19th century makes no sense – as if cities had to lease their sidewalks from a third party. This policy is indefensible and costly – especially when we bear the burden of competing broadband providers. Each broadband infrastructure can support the entire city, so why are we paying for multiple broadband systems and, now, having to replace the expensive poles just to make room for all the faux competition among identical packet transports?
Little has changed from 1889 to 2014. We’ve just learned how to bundle the wires more tightly and maintain virtual circuits within coaxial cables and fibers as lambdas.
IP-native policies would greatly reduce the cost and provide the cities with abundant infrastructure. And it really doesn’t matter whether we use copper or fiber. What matters is whether we have an owner who can limit access to only subscribers or whether we can use it as a public facility for the good of the community as a whole.
Notice the connectors in the inset photo – they are just like the cable TV connectors in your house. That’s because they are! It’s using analog wiring like the cable TV systems of yore rather than taking advantage of the simplicity of packet technologies. In effect it’s a huge tax on the infrastructure, created by government policies keeping the legacy analog telecommunications policies alive even as we use the infrastructure for digital packets.
Public Housing Projects
Public housing is where muni-broadband meets DIO. While it may be difficult to implement connectivity at the city level, it might be more feasible to provide connectivity in public housing as a basic amenity. It would be just one more benefit of using connectivity as a basic resource for other services in the building such as HVAC control, lighting, security, doorbells and other capabilities. This doesn’t mean that everything has to go over IP – just that the opportunity is available.
Many of the projects are for those with low income who may also represent a burden on the medical system. This creates a huge “innovation zone” opportunity. People can stay connected with their families and children can look after their elderly relatives.
For children, connected education is also a very important draw – we can get a digital bridge rather than a digital divide (thanks to Ben Compaine for “digital bridge”). Simply providing connectivity is not enough – the children may also need help with computers and other technologies – but it is a major step forward and sets an example for wider use.
Beyond Public Access TV
In the past, cable systems have often been asked to support TV channels for community use. This is a legacy idea; a connected city should view an online presence that includes both the web and video, perhaps for city meetings, as a basic capability rather than an additional burden. If anything, paper reports and posters are now secondary to a “web first” strategy. Without the limitations of channels the city can provide a rich set of video resources. A high school soccer game might be available to all without being limited by the boundaries of a city.
Spectrum Auctions and all that
Spectrum is a construct that takes the concept of circuits and applies it to wireless, creating channels out of resonant frequencies. Spectrum auctions accept the premise of channels.
In a sense spectrum auctions are another face of structural separation in treating each frequency band as profitable real estate. But that model is not sustainable as all traffic is normalized to packets of bits.
Those frequency bands are the real origin of the term “broadband” – the term became associated with cable TV in the days of CATV (Community Antenna TV), when the over-the-air signal was relayed and those broad bands were carried on cable. Today the term has also come to be synonymous with the Internet because that’s the way language works.
Just as competing broadband infrastructure doesn’t make sense, competing wireless bands no longer make sense. Even more to the point, there is no essential difference between wired and wireless bits and, still more, unlicensed local wireless connectivity means that any device can reach the world using a local radio (as with Wi-Fi). This reduces the value of owning wireless bands and, in time, should remove the need to license radios.
As public policy it no longer makes sense to police the use of radio frequencies. It’s akin to trying to license a color. Legacy gear depends on recognizing single frequency bands so we can’t immediately eliminate all regulation, but we should move towards a generation of gear that uses resilient borderless connectivity with or without wires.
Once we can assume IP connectivity it will make far more sense to use it as a generic transport rather than traditional radio. This is already happening as services like Pandora over cellular become increasingly common in cars, even with the limitations of today’s cellular, which gives us only a hint of what is possible.
IP Transition
While the FCC does recognize that we are moving to an IP-based infrastructure, this is more than a minor transition. We can’t simply replace ATM with IP. We need to think very differently about connectivity.
The mission of the FCC is to assure our ability to communicate and support vital services. It can do this best by setting in motion market forces that provide sufficient capacity so that we can depend on the new infrastructure. In fact, we can do better when we have a resilient rather than a brittle infrastructure.
As that happens we will have, for example, an emergency system that works better than today’s E911 because sensors (such as smoke detectors and medical alerts) will be able to send messages directly to the appropriate responders. IP-based systems will be resilient and work even when wires are down because they can still send messages by alternative means even if there isn’t capacity for a voice call. The FCC will be able to reduce its role and cede it to agencies native to the new infrastructure, such as a Department of Connectivity.
In the interim we need to frame policy issues, such as how to deal with the existing copper infrastructure, in terms of the new landscape. There is a certain nostalgia associated with copper, but when we take a critical look it is just another medium. We do need to assure power in an emergency, but managing distributed storage (AKA batteries) is a future direction. (Managed power is a topic in its own right – one I plan to write about in a future essay.)
Security and Privacy
Historically the FCC has been concerned about communications in the sense of speech, not just the technology. Those are distinct meanings of the word with only a passing relationship, but we tend to confuse the two because when intelligence is inside the network rather than in the devices, the network operators are indeed preserving the meaning of the content, AKA speech.
This kind of economy of meaning is part of the way language evolves. Thus we use the word “radio” for the technology of wireless as well as the business of broadcasting music to the point that Pandora bills itself as “radio”.
The raw packets of the Internet are a means of communicating entirely decoupled from their meaning. The intermediaries (carriers) aren’t even aware of the meaning of the messages.
Today the idea of communications (infrastructure) as a for-profit business works at cross purposes with the need for borderless connectivity. The carriers cannot be neutral if they must judge the value of the content so they can charge for some traffic as being more valuable than other traffic.
We have a similar problem with implementing social policy inside a network. Here too we are tasking the network to decide which bits are good and which are evil. Numbers are just numbers and trying to second-guess their intent is problematic at best.
Security and privacy are part of the larger question of how we adapt traditional policies and social practices to the new landscape. Agencies like the FTC are already starting to address these issues.
The first challenge, as with the IP transition, is to understand the new social literacy.
Better, Rather than Smart Cities
It’s easy to understand why we read about smart cities and all the gigabits of capacity needed to carry all that big data. It’s a very exciting idea and there are vendors eager to supply the gear to carry and process vast amounts of information.
But we need to be careful to distinguish between creating infrastructure and enabling technologies versus making the cities themselves “smart”. We need to heed the lessons of the urban renewal projects of the 1950s and Jane Jacobs’ reminder that cities are kept vibrant by their people – our role is to provide enabling technology.
A million sensors, each generating a message every second, might generate a few hundred megabits per second across an entire city. That really isn’t much. And the data itself is most valuable locally – a temperature sensor (what we used to call a thermostat) controls the local temperature.
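A back-of-the-envelope check, assuming a 50-byte payload per message (an assumption; a reading plus identifiers):

```python
# Citywide sensor traffic, back of the envelope. The 50-byte payload is an
# assumption (a reading plus identifiers and framing).
sensors = 1_000_000
message_bytes = 50
bits_per_second = sensors * message_bytes * 8
print(f"{bits_per_second / 1e6:.0f} Mbps citywide")  # 400 Mbps
```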
When we do need to process the data remotely as in medical alerts or aggregate information as in managing traffic flows we need to assure that we don't create unnecessary impediments.
This is why, again, borderless connectivity is the key. Fat pipes are nice, but we already have gigabits of capacity throughout the city. It’s just that that capacity is locked within each provider’s facilities and available only for a fee. It’s as if we had many one-lane roads but none available for public use.
Simply providing access to that capacity via Wi-Fi would give us many megabits everywhere at very low cost. We could then give the people in the city, both individuals and those creating municipal services, the ability to use their smarts to give us a better and more vibrant city.
Impact on existing businesses
Telcos and Other Networks
When we shift to funding our common facilities as a public good rather than as for-profit pipes what happens to companies who own those pipes?
Even in the absence of new policy initiatives these companies are struggling to come to terms with the need to make money by using rather than owning networks. For Comcast and Time-Warner the future is in creating and selling content.
Verizon and ATT have been trying to extend their business model into providing new services and investing in cellular. But even there they face pressures. The smartphone did away with revenue sources like ringtones, applications like WhatsApp challenge SMS revenues, and Wi-Fi-first is an alternative to cellular for voice.
A more fundamental change is necessary. This may be very difficult for today’s companies as we’ve seen with the failures of ATT. But that’s the genius of capitalism – by shifting the capital and recycling assets society can benefit even if individual enterprises fail.
The FCC was created in response to a market problem caused by high capital costs and little differentiation. We seemed to have no alternative. Today the Internet has shown us an alternative. The FCC can now let the market work its magic by having different business models for the common infrastructure and the services we create using that infrastructure.
There is a great deal of engineering and product-creation expertise inside today’s carriers, but it is viewed as a cost center. With an infrastructure approach those skills become sought after by the new customers – cities and other communities.
Smaller ISPs
Today there are many smaller ISPs (Internet Service Providers), such as Panix, which is working with Castle Village, and WISPs (Wireless ISPs), which use Wi-Fi as a way to build infrastructure.
They are facing a transition similar to what happened with the many BBS (Bulletin Board Systems) companies that served the online market before the Internet. In fact, many of the BBS companies became ISPs.
Many of these companies will become the new infrastructure providers for buildings and communities. Along with the divisions of existing providers, we have a wealth of expertise.
The availability of borderless connectivity will create opportunities for those with application skills to take advantage of the abundant opportunities to build new services – both those that serve existing businesses and ones we can’t imagine today.
It’s a chance for innovation to meet opportunity! That’s when we see more than incremental improvement – we can invent our futures.
Appendix: Language & Meaning
One of the challenges in talking about connectivity is that we often use words in lieu of understanding so we get the illusion that we are communicating when we are talking past each other.
The Internet. This is one of the most confusing words in today’s policy discussions. We tend to use the same word for the technology and the policy with little understanding of how it actually works. The dirty secret is that the basics of getting a packet from point A to point B are very simple. That’s why the Internet works so well. But things do become complicated when the technology is entangled with business policies that put third-party rent-seekers in the path and when telecommunications specialists try to help by building their assumptions into the infrastructure.
There is too much confusion to address in this paper – I’m planning to write a separate document. For the purposes of this essay we need to be wary of the “givens”. For example, claims that we need to favor voice traffic are “proven” by showing that some systems do just that. Yet the triumph of VoIP services like Skype is that they do not depend on such favors. No wonder it’s hard to understand what is really going on when the examples we have confuse current practices with future possibilities.
ISP (Internet Service Provider). This is a term that presumes that “Internet” is a service delivered in pipes (the broadband framing). It’s closely related to the term Internet Access that implicitly assumes that we can access an Internet out there somewhere. The terms frame the discussion in terms of traditional telecommunications.
Connectivity. I use the term connectivity to emphasize the relationships between the end points without regard to how the connections are implemented. Today’s Internet protocols and technologies will need to evolve beyond the current implementation if we are to get the full benefits of the powerful ideas that the Internet represents.
Borderless Connectivity. This is a term I use to further emphasize that there is no identifiable network owned by a provider. We make connections between end points anywhere – whether the devices are next to each other or on opposite sides of the world.
Ambient Connectivity. This is a different take on connectivity (http://rmf.vc/AmbientConnectivity), emphasizing the ability to assume connectivity is “just there” as basic infrastructure.
Radio. Language is very concise. The technology of radio and the business of broadcasting music were aligned in the 1920s, so we came to use the word “radio” for the business of broadcasting music over the air. Thus we talk about “Internet radio” and, in particular, Pandora positions itself as a new kind of “radio”. In day-to-day usage that’s OK, but when we are talking about technology and policy we need to be sure we understand what we mean when we use the words.
Communications. This is a good starting point – what does it mean to communicate? Just as radio came to mean music, the technologies we used to communicate, such as telegraphy and telephones, have led us to conflate the policy of the facilities we use to communicate with the meaning (or speech) itself. When a telecommunications company carried meaningful messages it had to “understand” the messages in order to allow us to communicate at a distance.
The school of communications is not in the engineering department, and with best efforts the meaning is no longer in the network. The Federal Communications Commission needs to be explicit about when it is managing the technology and when it is regulating speech.
Broadband. The term broadband has been used to describe the frequency bands we use to communicate. It’s also been called wideband. Another technique is baseband (as used in the original Ethernet), in which we don’t separate the signals into frequency bands but instead use the entire range.
A radio wave vibrates thousands or millions of times per second (the rate is its frequency, measured in Hertz). When a wave oscillates much faster (at 500 terahertz) we call it a color.
Starting in the 1940s some communities would mount shared TV antennas on a hill and carry the entire band of TV radio waves over a cable so that everyone in the town could receive TV from distant stations. Because it relayed the over-the-air broadband signal used for television, the business was called broadband.
The word has also taken on the additional meaning of carrying many bits per second – high capacity connections – because it just sounds like it should mean “fat pipe”. A provider would own the broadband and use it to deliver its services – again, broadband.
When we use that broadband connection we are actually tunneling through to the Internet, working at cross-purposes with the business model of broadband by bypassing the provider’s services. We’ve extended the term broadband to include this connectivity. This is confusing because the business model of broadband has a provider in the middle, whereas with the Internet we are not depending on the provider. In a sense we are connecting despite the broadband business model, not because of it. No wonder people are confused – in a sense broadband is the antithesis of the Internet.
Cable. Here too we see a business term associated with the coaxial cable used to relay the broadcast signal. The emphasis is on the video channels offered. Thus FiOS is considered “cable” even though it uses fiber for its delivery, and DirecTV is also in the cable business. I use the term loosely to mean the channel bundles offered by cable companies, containing basic cable (ESPN, Comedy Channel, and CNN) as well as HBO, SHO, etc.
Over-the-Air (OTA) and Over-the-Top (OTT). Over-the-air is simply traditional broadcast television. (Though I’ve seen ads offering “wireless cable” as an exciting new offering.) OTT means content over IP and is also known as TV Anywhere. It is typically associated with licensed content such as HBO, where you have to use your cable subscription ID to sign in to the sites. In effect the cable is simply a way to show you’ve paid for a license. It isn’t a big step to say that you don’t need to bother with a cable box.
Information. This is a common English word with a lot of semantics associated with it, but, like the word “communications”, it also has a strict technical sense. Just as we don’t confuse physicists’ use of “ergs” as a measure of work with work in the sense of labor, we shouldn’t confuse the technical use of information as a measure of the capacity of a channel, measured in bits, with the day-to-day use of the word. Traditional telecommunications uses bits as a measure of information within a channel. Today our meaning is no longer confined to channels, so it doesn’t even make sense to ask how many bits are in a pause.
Circuits, Pipes, Channels, Bands, Tubes. All are similar terms for the idea of treating messages like flows or freight that have to be managed. This seemed essential, and information science is all about these channels. But the Internet is fundamentally different. The meaning is no longer in those channels and we don’t even require that all the packets get through. BitTorrent is an example of a protocol that not only sends files through multiple paths but can also gather the pieces from disparate sources!
Moore’s Law. This term, in the strict sense, comes from Gordon Moore’s 1965 paper saying that the density of transistors in an integrated circuit would double every two years. The term has taken on a wider meaning, referring to the rapid improvement in price performance that we’ve seen in technology. In 1996 I wrote a chapter (http://rmf.vc/BeyondLimits) explaining that we need to understand hypergrowth (and Moore’s Law) in terms of markets. And we see this with telecommunications vs. the Internet. If we take the same wires and radios and lock them into the telecommunications business model we see only slow improvements and, sometimes, increased costs. But those same physical materials in the context of the Internet have shown the kind of hypergrowth we expect from Moore’s Law.
The Tragedy of the Commons. This was the title of a 1968 article in Science (http://www.sciencemag.org/content/162/3859/1243.full). The basic thesis is that we will inevitably fight over a fixed pie. The Internet has shown us that the facilities we use to communicate are not a fixed pie. In fact, by taking advantage of opportunities we have an abundance of the commons. The real tragedy may be the failure to understand how we can work together to realize this abundance.
ALOHAnet. An early (1971) radio packet network that allowed computers to communicate with each other over large distances. Because packets could easily be lost, the researchers developed techniques for dealing with what we now call a best-efforts network. In a sense Ethernet is Aloha on a cable. By containing the radio network within a cable, Ethernet avoided the need to deal with the FCC.
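The core technique is easy to sketch: transmit, and on a collision wait a random (and growing) interval before retrying, so uncoordinated senders spread out. This is an illustrative model, not the original protocol code:

```python
# Illustrative model of the ALOHA idea (not the original protocol code):
# transmit, and on a collision wait a random, growing interval before
# retrying so that uncoordinated senders spread out over time.
import random

def aloha_send(collision_probability: float, max_attempts: int = 16):
    """Try to send one packet; return (attempts used, total backoff waited)."""
    waited = 0.0
    for attempt in range(1, max_attempts + 1):
        if random.random() > collision_probability:
            return attempt, waited  # the packet got through
        # Collision: back off for a random interval that grows with each
        # failure, reducing the chance of colliding again on the retry.
        waited += random.uniform(0, 2 ** attempt)
    return max_attempts, waited

random.seed(1)
for load in (0.1, 0.5, 0.9):
    attempts, waited = aloha_send(load)
    print(f"collision prob {load:.1f}: delivered on attempt {attempts}, "
          f"after {waited:.1f} time units of backoff")
```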
Wi-Fi. This is the name for a particular wireless technology (also known as IEEE 802.11). We need to be careful because it is another technical term that’s also used to describe the way we use the technology. “Free Wi-Fi” really means that there is readily available connectivity to the rest of the Internet (ambient connectivity). We’d get the same benefit from a wired connection.
Readings
I go into more detail on some of these issues in the columns and essays I’ve written over the years.
Purpose vs. Discovery (http://rmf.vc/PurposeVsDiscovery). In this essay I explain how we get abundance by not having preconceived notions of what must work; instead we need to discover what we can do with the resources available. In 1996 I wrote Beyond Limits (http://rmf.vc/BeyondLimits), which explains why Moore’s Law (more generally, hypergrowth) is a result of markets, not physics.
HTML5 (http://rmf.vc/IEEEHTML5) goes into more depth on HTML5, which is far removed from the simple <h1>Hello</h1> example; it has become a full-fledged operating system and a basis for safe programmability.
Other columns in the IEEE Consumer Electronics Magazine go into depth on other issues:
http://rmf.vc/IEEERefactoringCE is about how the Internet was discovered rather than invented and its impact on the business of consumer electronics (and, today, IoT).
http://rmf.vc/CILight is about why it is more important to make simple things simple than to assure we can solve hard problems. We are willing to allocate resources to solve a few hard problems, but enabling technologies let everyone discover solutions they can’t necessarily anticipate.
More of my columns appear in the IEEE Consumer Electronics Magazine: http://rmf.vc/IEEENotTheMessage and http://rmf.vc/IEEENotScripted.