Internet Infrastructure Archives - High Tech Forum

Making Time-Sensitive Networks Happen

Last time, we discussed the promise and challenge of content-centric networks. Current events – the WannaCry worm in particular – make one CCN feature especially important: CCNs have heightened security because they can recognize anomalous traffic. This feature is used in some infrastructure networks today to provide an additional layer of security. The key is understanding what normal traffic looks like and intervening when network monitoring detects a departure from the pattern.

WannaCry, for all its scope, appears to be a financial failure. While other ransomware attacks have taken in millions of dollars, estimates of WannaCry payoffs are less than $100,000, in part because it was very poorly coded and well publicized.

Next time may be different, but except for the UK’s National Health Service, we dodged a bullet this time. Regardless, security is one of many unsolved problems for the Internet; time-sensitive networking is another.

Time-Sensitive Networks Satisfy a Real Need

The Internet is great at transferring lots of data from any device to any other device at low cost. This is because it shares resources dynamically. Instead of each device holding exclusive access to a narrowband circuit, the Internet pools communication resources.

This means that devices have access to wideband channels on the assumption that they use full capacity on an occasional basis. The key to “fast and cheap” is the episodic use characteristic of web browsing: you load a page and you read the page. While you’re reading, someone else is loading and vice versa.

Video streaming is also episodic: Netflix sends you a clump of traffic, and you play it out. Towards the end of the clump, they send you another clump. And so on.

Networks provision capacity according to typical use, so the network is generally very fast. But network performance on today’s Internet is more prediction than guarantee. It works well most of the time, but sometimes it doesn’t.
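
To see why statistical sharing works so well most of the time, here is a minimal Monte Carlo sketch in Python; the subscriber count, duty cycle, and link capacity are invented for illustration, not real provisioning figures.

```python
import random

# Illustrative assumptions, not real network data:
SUBSCRIBERS = 1000         # users sharing one aggregation link
DUTY_CYCLE = 0.05          # fraction of time a user is actively loading something
PEAK_RATE_MBPS = 100       # advertised per-user speed
LINK_CAPACITY_MBPS = 10000 # shared link capacity
TRIALS = 10000

overloads = 0
for _ in range(TRIALS):
    # count how many users happen to be active at this instant
    active = sum(1 for _ in range(SUBSCRIBERS) if random.random() < DUTY_CYCLE)
    if active * PEAK_RATE_MBPS > LINK_CAPACITY_MBPS:
        overloads += 1

print(f"Average active users: ~{SUBSCRIBERS * DUTY_CYCLE:.0f}")
print(f"Fraction of instants the link is oversubscribed: {overloads / TRIALS:.4f}")
```

With these made-up numbers the link is almost never oversubscribed even though the sum of advertised speeds is ten times its capacity, which is the whole point of pooling; the occasional bad instant is the "sometimes it doesn't" above.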

Tolerance of higher-than-normal delay is part of the bargain. Unfortunately, high delay doesn’t work for all applications. That’s where time-sensitive networking comes in: it ensures that applications that need low delay can get it all the time.

What Applications Need TSN?

Many Internet of Things applications are time-sensitive, especially transportation systems in Smart Cities:

We live in a world with self-driving automobiles, and reusable self-landing rockets.   Such complex systems of multiple sensors, computers, and actuators form a Cyber-Physical System (CPS).  Some CPSs will operate at scales that will challenge comprehension.  From autonomous platooning vehicle convoys communicating with each other and highway infrastructure, to Smart Cities coordinating resources (avoiding traffic congestion, coordinating parking, reducing emissions and power consumption), to Smart national power grids and beyond.

Just as CCN coordinates consumers of common, replicable information with each other, TSN coordinates users of time-sensitive information that can’t be replicated. Rather than spreading the pain of congestion as today’s Internet does, TSNs prevent congestion by organizing the activities that lead to congestion when they’re unmanaged.

TSN Isn’t End-to-End Networking

The current Internet, with some significant exceptions, is fairly stupid. The network is generally transparent to applications. It manages congestion by randomly discarding packets, counting on applications to slow down as they lose information.

This works well for the web, email, video streaming, and backups. It doesn’t work well for communications and for real-time sensors. TSN requires the ability to immunize certain applications from packet loss and high delay.

This requires two capabilities that have not been part of the traditional Internet: pre-emption inside the network (immunization) and signaling between the application and the network to identify the need for pre-emption. Pre-emption and signaling get the job done on private networks without an additional step.
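
To make pre-emption concrete, here is a minimal strict-priority scheduler sketch in Python; the two traffic classes and the sample packets are hypothetical.

```python
import heapq
import itertools

class StrictPriorityQueue:
    """Always dequeues time-sensitive packets before best-effort ones."""
    TIME_SENSITIVE = 0   # lower number = higher priority
    BEST_EFFORT = 1

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # preserves FIFO order within a class

    def enqueue(self, packet, priority):
        heapq.heappush(self._heap, (priority, next(self._counter), packet))

    def dequeue(self):
        if not self._heap:
            return None
        _, _, packet = heapq.heappop(self._heap)
        return packet

q = StrictPriorityQueue()
q.enqueue("bulk backup chunk", q.BEST_EFFORT)
q.enqueue("sensor reading", q.TIME_SENSITIVE)
q.enqueue("video segment", q.BEST_EFFORT)

# The sensor reading jumps the queue even though it arrived second.
print(q.dequeue())  # sensor reading
print(q.dequeue())  # bulk backup chunk
print(q.dequeue())  # video segment
```

The signaling half of the problem is deciding which flows get to enqueue at the time-sensitive priority, which is exactly what the application-to-network signaling above is for.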

But these features have costs for public networks because low delay is a scarce quantity even in high-capacity networks. One way we allocate scarce resources is to assign a price to them.

Low Delay is Worth Something

 

Figure: Enablers for Critical Control of Remote Devices (source: Ericsson)

While the ordinary, delay-tolerant application would not be willing to pay for delay immunity on a short time span, other applications are different. High-Definition voice is one such application, and smart transportation is another.

These are both cyber-physical systems. HD voice is driven by requirements of the ear and brain that can’t be changed by the way we code applications. The network has to adapt to the use case.

5G standards are organized around use cases that today’s networks don’t support well, if at all. Perhaps the most challenging is “critical control of remote devices”. CCRD includes remote control of heavy machinery, factory automation, real-time monitoring, smart grids, and remote surgery.

The applications include large, heavy, physical systems that take time to adjust and realign. So the network can’t have too many limitations of its own, since the application is already so challenging.

Regulation Without Exceptions

In today’s communications landscape, these applications will use separate communications channels, so they would qualify as “non-BIAS data services” under the Wheeler Open Internet rules. Splitting applications off Internet pipes in order to satisfy their security or performance characteristics puts us on the wrong side of the thing that makes the Internet special: it takes the application out of the common pool and gives it a dedicated circuit.

Segregating applications from each other is probably always going to be necessary in special cases, but it’s desirable to limit the practice. So here’s a challenge to regulators: let’s work toward reforming our Internet regulations so that 5G use cases can run over the same facilities as the traditional web, governed by the same regulations as the rest of the Internet.

I think we need the ability to offer virtual services that use software-defined networking to merge and coordinate diverse applications like CCN and TSN over the common Internet resource pool. But the regulatory problem needs to be solved by Congress and the FCC before the new services can become real.

 

 



Highly Illogical Broadband Claims

The US jumped four places in Akamai’s most recent State of the Internet report, from 14th to 10th. Average connection speed increased from 17.2 Mbps to 18.7 Mbps, an increase of 8.8%. The US increase was the second best in the world, behind little Korea’s 9.3% rise. These measurements are taken from real-life TCP streams between end-user machines and Akamai servers.
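
For reference, the percentage arithmetic; the rounded published figures give roughly the same answer as Akamai's 8.8%, which presumably reflects unrounded quarterly values.

```python
old_speed = 17.2  # Mbps, prior-year average connection speed
new_speed = 18.7  # Mbps, current average connection speed

increase = (new_speed - old_speed) / old_speed * 100
print(f"Year-over-year increase: {increase:.1f}%")  # ~8.7% with these rounded inputs
```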

They don’t reflect actual broadband speeds advertised by ISPs because multiple streams are active at the same time. A more representative picture of overall broadband speed is the Akamai average peak connection speed (APCS), which hit 86.5 Mbps. This figure isn’t precisely representative of broadband speed because it’s helped by queueing upstream of ISP networks.

It’s always good to see broadband speeds going up. The cable company lobby, NCTA, celebrated the good figures with a blog post that drew a strange attack from net neutrality advocate Public Knowledge.

An Illogical Rate of Increase Claim

Figure: Average US Broadband Speed Improvements Over Two-Year Spans (data source: Akamai State of the Internet reports)

Public Knowledge claims that US broadband speed has increased at a faster rate since the 2015 Open Internet Order was imposed. The problems with this assertion are many:

Indeed, as the NCTA graph shows (based on the latest Akamai State of the Internet Report), the average speed of broadband connections has not only continued to rise since the FCC first adopted net neutrality rules in 2010, but the rate of increase has accelerated since the FCC adopted the Title II reclassification Order in February 2015.

In the first place, PK doesn’t quantify the rate of increase; they just say “look at the graph!” In fact, US broadband speeds have increased at a slower rate since the 2015 order was passed than they did in preceding two-year intervals other than the drought that followed the 2010 Open Internet Order.

Over the two years since the order was passed, APCS has increased by 38%, no better than the average of all two-year increases across the period. That overall average is itself depressed by dreadful figures in 2011, in the shadow of the 2010 Open Internet Order. But Open Internet Orders are not the root causes of broadband speed.

Regardless, in 2014, the year before the 2015 order was passed, the average two-year increase was 45%, much better than what we’ve seen recently. The best years for two-year rates of increase have been 2013 (50%) and 2014 (45%). Coincidentally, these years spanned the court challenge to the 2010 Open Internet Order and the lawless period that followed the court’s rejection.
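
Here is a sketch of how those two-year rates fall out of a yearly APCS series; the values below are placeholders for illustration, not the actual Akamai numbers.

```python
# Hypothetical year -> average peak connection speed (Mbps); replace with Akamai data.
apcs = {2010: 23, 2011: 28, 2012: 35, 2013: 44, 2014: 55, 2015: 66, 2016: 79, 2017: 86}

def two_year_increase(data, year):
    """Percent growth in APCS over the two years ending in `year`."""
    return (data[year] / data[year - 2] - 1) * 100

for year in range(2012, 2018):
    print(year, f"{two_year_increase(apcs, year):.0f}%")
```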

So Public Knowledge is peddling a fake fact.

An Illogical Explanation for the Fake Fact

Fake facts are useful when you need to support a fake argument. That argument, of course, is the “Virtuous Cycle” argument (or “Virtuous Circle”; they use both names). This abstraction proposes that broadband quality (or maybe investment, speed, or some other property) increases because of demand for (or maybe “by”; that’s unclear) applications and their users.

As networks become more magical, applications get even more wondrous. This pressures networks to become even more fantastic. But none of this can possibly happen unless networks are strongly regulated by the FCC, apparently.

The Virtuous Cycle/Circle is supposed to be an economic theory, but there’s no record of it in the economics literature. It has also been widely criticized by economists and others. At best, it’s a conjecture since there’s no empirical support for it; at worst, it’s simply a convenient post hoc rationalization.

What Really Explains Faster Broadband?

While it’s comforting for partisans to blame regulation for reduced rates of investment and slowing rates of improvement in network quality (or to credit regulation for increased investment and quality, as the case may be) there’s more to this game than mere regulation.

Fundamentally, network innovation, investment, and utility flow from technology and human needs. Regardless of the behavior of DC regulators, networks are going to keep improving.

People are going to keep on using applications too, and not simply because networks somehow force them to do so. In fact, the desire to use network applications comes from the appeal of applications in their own right. And this is made possible by better hardware and software in mobile devices.

Moore’s Law is the Logical Explanation

The real driver of the Internet ecosystem is chips and software. As processors in smartphones get better, programmers can create better applications. As processors and specialized networking chips get better, engineers can make networks faster, cheaper, and more reliable.

The virtuous cycle argument places the causes of innovation in the wrong place, in other words. The forces that make applications better also make networks better. And these forces are external to both networks and applications.

In effect, the FCC argues that man descends from monkeys, but the reality is that man and monkey share a common ancestor. The FCC cops a trick from diet fads that say eating carbohydrates makes us fat, when the reality is that eating too much makes us fat, regardless of what form the calories take.

When we understand that the completely unregulated markets for software and semiconductors drive improvements in applications and networks alike, we can better grasp the importance of  Augmented Reality.

We’re on the Cusp of Marvelous Developments

Apple’s Worldwide Developers Conference showcased developments in AR such as ARKit, a set of developer tools for building better augmented reality applications. Pokémon Go showed us the way, but it’s stale now. That doesn’t mean AR is dead; it means we’re on the verge of the next step.

The new normal in computing is voice input instead of keyboard input. Instead of looking down at screens, displays will sit on our noses all the time. And instead of blasting sound all over the office, we’ll hear it from ear buds.

Instead of wearing heart rate monitor straps, we’ll have watches that continually monitor heart rate and blood sugar. Our emotions will become inputs and outputs to the computing experience. Instead of sitting on our desks, computers will travel with us wherever we go.

These Developments are not Regulation Driven

The new paradigm for computing is happening, not just in the US but all over the world. It’s being made by entrepreneurs and innovators willing to invest and take risks, not by bureaucrats who get paid to say “no”.

The FCC can’t stop it, and by and large it can’t even alter the speed at which it comes about. That’s because innovation is global and the FCC is merely domestic.

What the FCC can do is help to keep large swathes of the American population from falling behind. And it can do this by saying yes to network deployment and innovation. A good first step in that process is to let go of the vacuous virtuous cycle of networks + apps innovation. That argument is illogical.

[Disclosure: I own Apple stock.]


Open Internet Orders Degrade Internet Improvement

This is a followup on my post about claims that broadband speeds got a bump from the FCC’s 2015 Open Internet Order. That post included a table with all the relevant data, but it’s easier to visualize with charts. Clearly, the orders degraded the rate of improvement in broadband Internet service in the US.

The underlying data is each year’s average peak connection speed, taken from Akamai’s State of the Internet reports. Each year’s improvement figure is the average of the four quarterly values compared with the corresponding quarters of the previous year.

The average annual improvement from 2010 to 2016 is 23 percent, represented by the blue line running horizontally.
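
A minimal sketch of that calculation, using invented quarterly figures rather than the real Akamai data.

```python
# Hypothetical quarterly APCS values (Mbps) for two consecutive years.
prev_year = [50.0, 52.0, 54.0, 56.0]   # Q1..Q4, year N-1
this_year = [60.0, 63.0, 66.0, 69.0]   # Q1..Q4, year N

quarterly_improvements = [
    (cur / prev - 1) * 100 for cur, prev in zip(this_year, prev_year)
]
annual_improvement = sum(quarterly_improvements) / len(quarterly_improvements)

print([f"{q:.1f}%" for q in quarterly_improvements])
print(f"Annual improvement for year N: {annual_improvement:.1f}%")
```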

2010 Data Has Problems and Was Omitted

I’ve omitted the data for 2010, when the rate of improvement was -5%, because the 2010 Open Internet Order wasn’t passed until December. There was a lot of fear in 2010 that a Title II order was in the offing, based on language in the NPRM.

The first quarter of 2010 saw a 9% decline in speed from the first quarter of 2009. The second quarter was even worse, declining 49%.

A decline this sharp suggests a problem in the underlying data set because it’s a complete outlier; speeds have always improved except for 2010. So I start the averages with 2011, when the 2010 OIO took effect and was challenged.

Nothing Spurs Improvement Like a Court Challenge

The rate-of-improvement graph suggests that nothing stimulates broadband speed improvement like a court challenge. The best three years span 2011-2013, the time when the 2010 Order was under review. The average for this period was 28%, while the overall average was 23% as noted.

The passage of the 2015 OIO led to an immediate decline to 18% for 2015 and a slight recovery to 22% in 2016. But the average improvement since the 2015 order passed is 20%, less than the overall 23% average.

The best two years for US broadband improvement were 2012 and 2013, when the 2010 order was under review by the DC Circuit Court. The improvements in these years were 26% and 32%. With a two-year average improvement of 29%, these were the good old days.

Nothing Retards Improvement Like an Open Internet Order

In the years when new Open Internet Orders took effect, 2011 and 2015, sharp declines in broadband improvement were the order of the day: a below-average 22% in 2011 and a sharply below-average 18% in 2015.

Even when the figures for 2016 are taken into account, the numbers show very clearly that Open Internet Orders are a drag on the rate of broadband improvement in the US. The numbers also show that the Title II order did more damage than the 2010 Title I order.

We want our broadband speeds to improve. The data show that the best way to make that happen is to challenge open Internet orders, especially those that classify broadband Internet service under Title II.

 


Congestion Pricing for Infrastructure: I Still Don’t Know Why Net Neutrality is Important

A month ago I wrote a blog post questioning the importance of net neutrality: Remind Me: Why Should I Care about Net Neutrality? Luckily for me, law professor Brett Frischmann seeks to answer my question in a short piece in Wired, “Why you should care about net neutrality”. Sadly, I’m not convinced.

Frischmann and his co-author argue that the Internet should be a free-for-all, just as roads supposedly are, or used to be. While both roads and the Internet have to deal with the scourge of congestion, Frischmann believes it must be resolved by waiting (except in the case of emergency vehicles, of course). If the sole protocol is the first-come, first-served rule, the pain will be spread fairly:

But while smart systems seem attractive, they’ll inevitably be optimised for corporate profit and control. The principle of first-come, first-served is our best protection against interference. We need it on the web – and on the roads.

Frischmann is regarded by his critics as a bit of an ideologue, wedded to the open access model of shared infrastructure and unwilling to consider issues raised by public choice theory. A review of Frischmann’s book Infrastructure: The Social Value of Shared Resources by Adam Thierer highlights shortcomings in his open access model.

Such a Quaint Story

For starters, Frischmann’s view of roadway management is out of date. Economists have long realized that indiscriminate approaches to traffic management on the roads don’t lead to welfare-maximizing results. The high-tech alternative is congestion pricing, a system that encourages drivers to trade money for time and vice versa.

Congestion pricing wasn’t practical to implement in the past, but advanced technology has changed all that. Singapore is on its second generation of congestion pricing now, with a system that sets dynamic prices for access to congested areas:

In 1998, Singapore replaced the system with the Electronic Road Pricing (ERP) program, which uses modern technology. At the start of the journey a Cash Card is inserted into the On-Board Unit (OBU), which is fixed permanently in the vehicle and powered by the vehicle battery. When passing an ERP gantry the cash balance after the ERP charge deduction is shown on the OBU for 10 seconds. The electronic system has the ability to vary the prices based on traffic conditions and by vehicle type, time and location. Today all vehicles are charged, only emergency vehicles are exempted. In 2005 the coverage of ERP expanded the gantries around Singapore central business district and on major arterials and expressways. To ensure optimal use of road space and to maintain optimal speeds, the system is revised quarterly.

This system measures congestion dynamically and uses AI and big data to set prices optimally. The net result of this system is greater use of public transit and less congestion. This leads to lower carbon emissions, of course.
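
The pricing logic can be sketched very simply: measure how far traffic has fallen below a free-flow target and scale the toll accordingly. The rates, thresholds, and vehicle factor below are invented for illustration and are not Singapore's actual ERP tariff.

```python
def congestion_charge(measured_speed_kmh, target_speed_kmh=45.0,
                      base_charge=1.0, max_charge=6.0, vehicle_factor=1.0):
    """Toll rises as measured speeds fall below the free-flow target."""
    if measured_speed_kmh >= target_speed_kmh:
        return 0.0  # free-flowing traffic, no charge
    shortfall = (target_speed_kmh - measured_speed_kmh) / target_speed_kmh
    charge = base_charge + shortfall * (max_charge - base_charge)
    return round(min(charge, max_charge) * vehicle_factor, 2)

print(congestion_charge(50))                      # free-flowing: 0.0
print(congestion_charge(30))                      # moderate congestion
print(congestion_charge(15, vehicle_factor=1.5))  # heavy congestion, larger vehicle
```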

Modern Approaches to Congestion Pricing

An op-ed in the Wall Street Journal by economists Peter Cramton and R. Richard Geddes, “How Technology Can Eliminate Traffic Congestion“, examines the benefits of congestion pricing on the level of nations. One of the benefits of dynamic pricing systems they cite is smarter investments in roadway upgrades:

Accurate road prices will also help us make smarter infrastructure investments. A new lane, for instance, can be targeted to where its value—as reflected in prices—is greatest. Real-time road prices will reveal that value, thus reducing or eliminating the politicization that has afflicted infrastructure investment.

Of course, we have dynamic road pricing in the US as well: When I drive to Boulder, Colorado on US 36, I can choose a fast lane over lanes with the normal level of congestion. Drivers in the San Francisco Bay Area have the same option, one that I used on several occasions when running late to a meeting or a flight.

What Does this Have to Do with the Internet?

Arguing that the Internet should work just like the roads actually means that “pay for priority” fast lanes should be legal. Frischmann denied this in a Twitter exchange:

In fact, access to restricted areas is an example of paying for priority as the alternative is taking public transit. Prices are set by area, which effectively means destination. The whole system is based on willingness to pay (WTP). The benefit that Singapore cites is increased use of public transit.

We Can’t Simply Build our Way out of Congestion

Congestion is a fact of life in shared infrastructures that we can’t eliminate by adding capacity willy-nilly. The trouble with adding capacity is that people take it as an invitation to use the commons more intensely, so the congestion comes back.

A congestion-free infrastructure would need to be orders of magnitude more expensive than today’s systems. And if such an infrastructure did solve the congestion problem, it would ultimately do so because its high prices reduce demand.

So the way out of the congested infrastructure dilemma is to put supply and demand into equilibrium. This generally requires pricing access in a way that promotes investment. It also means putting buyers and sellers into a dialog with each other, something that doesn’t happen when regulators erect walls between us.

This latter dynamic underscores the importance of Thierer’s shoutout to public choice theory. Because regulators are no smarter or more virtuous than the rest of us, agencies will always be subject to capture and special interest manipulation.

Quality Pricing

In a recent essay, “Broadband service quality: Rationing or markets?”, network theorist (and High Tech Forum contributor) Martin Geddes argues that we don’t price and sell residential broadband service in a rational way:

Today we characterize broadband services by their bearer (e.g. cable, DSL, 4G) and service “speed”. This does not sufficiently describe the service. Specifically, measures of average or peak “speed” do not define the quality on offer, and it is the quality that determines fitness for purpose in use.

We mistakenly confuse speed with quality, and tend to regulate practices that affect speed whether they impact quality or not. Paid prioritization is the best example. Despite activist rhetoric, it’s actually possible to increase the quality of some Internet streams without affecting others.

The BITAG Differentiated Treatment of Internet Traffic report proves this claim:

When differentiated treatment is applied with an awareness of the requirements for different types of traffic, it becomes possible to create a benefit without an offsetting loss. For example, some differentiation techniques improve the performance or quality of experience (QoE) for particular applications or classes of applications without negatively impacting the QoE for other applications or classes of applications. The use and development of these techniques has value.

Geddes points out that each application has a quality floor, a threshold level of latency and loss that bounds its ability to function. We care less about peak bandwidth than about the minimum quality across a transaction, an application-level activity with a beginning and an end. But regulation still focuses on peak speeds.
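
A toy expression of Geddes' point, with guessed threshold values rather than measured application requirements.

```python
# Illustrative quality floors: max tolerable one-way latency (ms) and loss (%).
QUALITY_FLOORS = {
    "voice call":      {"latency_ms": 150,  "loss_pct": 1.0},
    "remote control":  {"latency_ms": 20,   "loss_pct": 0.1},
    "video streaming": {"latency_ms": 500,  "loss_pct": 2.0},
    "email":           {"latency_ms": 5000, "loss_pct": 5.0},
}

def fit_for_purpose(app, measured_latency_ms, measured_loss_pct):
    floor = QUALITY_FLOORS[app]
    return (measured_latency_ms <= floor["latency_ms"]
            and measured_loss_pct <= floor["loss_pct"])

# A path that is plenty fast can still fail voice while easily carrying email.
print(fit_for_purpose("voice call", 180, 0.5))  # False: latency floor breached
print(fit_for_purpose("email", 180, 0.5))       # True
```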

Commercial Internet Pricing is Quality Based

When businesses buy Internet transit services, their contracts tend to specify volume, delay, and loss rather than speed. This is the norm because these are the factors that affect the costs of service provision.

These factors also define service quality. Once the provider promises to deliver a volume of traffic at set levels of delay and loss, connection speed becomes one of many tools to accomplish the goal.

When usage, delay tolerance, and loss tolerance are all unknowns, we’re left with an unknown level of quality. While this simplifies billing, it doesn’t do justice to the needs of applications, innovation, or investment.

A side effect of switching from the current billing model to a quality-based model is that the unproductive net neutrality debate summarily ends. When users have control over the end-to-end quality of each application transaction, the means used by the provider to deliver the desired quality are unimportant.

Despite Professor Frischmann’s intervention, I still don’t think net neutrality is important.

[Disclosure: As a member of Singapore’s Economic and Regulatory Advisory Board, I have done paid consulting work for the Republic of Singapore on multiple occasions.]

 


Microsoft Closes Digital Divide! Heh, Just Kidding

Happy Prime Day! Here’s one special deal you don’t want to buy: Microsoft’s grand plan to bring high speed broadband to the less-populated fringe of rural America for peanuts. It sounds appealing, but it has some major issues.

Microsoft is still promoting TV White Spaces, a speculative system that uses unlicensed spectrum to build wide area networks. Because spectrum is free, White Spaces advocates claim it’s a marvelous system for sparsely-populated areas.

But free spectrum is a barrier to network investment. Nobody wants to spend money on equipment that won’t work because too many others are on the same bands. Equipment for unlicensed networks needs to be dirt cheap.

Constant Demand for Bandwidth

The history of TVWS primarily consists of advocates petitioning regulators for more and more spectrum allocations. Microsoft wants the FCC and Congress to set aside three TV channels in every market – even the urban markets where spectrum is scarce.

The broadcasters are rolling on the floor laughing at this request:

Microsoft is currently reminding fans why some sequels should never be made. The latest entry in the tech giant’s Vacant Channel franchise is yet another heist movie based on a con game that’s too clever by half.

According to Microsoft, it is urgent that the Federal Communications Commission reserve a vacant UHF white space channel in every market nationwide following the post-auction repack of broadcast television stations, and Microsoft maintains this reservation can be accomplished without causing harm to television stations.

That’s nonsense on its face. The proposal is either unnecessary, because there will be plenty of spectrum, or it is harmful, because there will not be enough.

It appears that NAB is not a booster.

Microsoft’s Dubious Accounting

Microsoft commissioned a study from Boston Consulting Group that’s meant to prove TVWS is the best technical solution for areas with particular population densities:

New directional findings by The Boston Consulting Group suggest that a combination of technologies utilizing TV white spaces are the most efficient technologies to connect areas populated at densities from two to 200 people per square mile.
As the population thins, satellite becomes the most cost-effective solution because the infrastructure costs of building towers make TV white spaces, or any terrestrial wireless technology, less attractive.
In higher-density rural areas, higher-frequency 4G LTE technologies become the most cost-effective option.

It doesn’t appear that Microsoft asked their consultants to examine LTE-Unlicensed. This is unfortunate because it makes a lot of sense in the locales identified as TVWS candidates. LTE-U is also less expensive to deploy, therefore much more practical for the application.

Duplicative Solutions

Unlicensed wide-area networks have a history of failure. Advocates have been touting TVWS for 15 years, and all that’s come of it is some demonstration networks in some of the world’s poorest nations.

While policy wonks have been touting TVWS in DC, real networks in rural areas around the world run on satellites, Wi-Fi, WiMax, and 3GPP technologies such as 3G and LTE.

Spectrum is available in rural areas for unlicensed use already, because there is low demand for TV broadcast channels and 3GPP networks. So there’s no need for regulators to make more available.

The Real Problem is Equipment Cost

IEEE 802.22 is a reasonable approach to wide-area networks over unlicensed spectrum. It uses a technical approach similar to UWB, DOCSIS, and WiMax that divides channels into time slots and limits contention.

But there is low demand for 802.22 equipment, which leads to limited choices for buyers and high prices. The money saved by not having to pay for spectrum licenses is thus lost to high equipment costs.

The obvious solution is to design TVWS systems around standard technologies such as LTE-U. LTE-U interconnects transparently to standard LTE networks, so the cost equation is very good.

A Wildly Speculative Plan vs. a Practical Plan

TVWS advocates believe they can solve their cost problem by pushing TVWS equipment into crowded cities. The alternative is to pull LTE-based technology into rural networks.

It’s obviously much more practical to use technology already supported by an established industry than to deploy new technology. When the new tech has limited appeal, the decision is a no-brainer.

TVWS is folly. The best technical solution for 2 – 200 people per square mile rural locales is LTE and LTE-U. TVWS is too little, too late.


Progress in the Debate over TV White Space

Tuesday (July 11, 2017), Microsoft unveiled their current vision for unlicensed radio services in the TV White Space (see “Microsoft calls for U.S. strategy to eliminate rural broadband gap within 5 years”.)

Progress

Microsoft’s proposal demonstrates progress in the debate over use of the TV White Space.  A decade ago, proponents of unlicensed operation in the TV White Space stated that it “will transform every aspect of civil society.”  (No cite given because I don’t want to be cruel.  But, search engines are your friend.)  Yesterday, Microsoft made available a white paper about White Space that asserted that “Overall, TV white spaces technologies appear to be the optimal solution for a little more than 19 million people . . .”

Nineteen million people is a tad less than 6% of the population.  Toning down the rhetoric in support of unlicensed use of the TV White Space from “transform civil society” to “provide a more cost-effective alternative for 5.8% of the market” is real progress.  (The quote regarding 5.8% is my statement, not Microsoft’s.)

Doubts Remain

Still, I doubt that unlicensed operation in the TV White Space would “provide a more cost-effective alternative for 5.8% of the market.”  The Microsoft white paper asserts that using TV White Spaces could provide access to 23.4 million people in rural areas for a cost in the range of $10–15 billion; it also asserts that using commercial wireless in the 700 MHz band would cost $15–25 billion to provide the same coverage.  (Microsoft White Paper at pp. 12-13.)  Thus, Microsoft asserts that using 700 MHz spectrum is projected to cost about 1.5 times as much as using the White Space.  Microsoft cites a study by the Boston Consulting Group for these cost estimates; I could not find a copy of that study or a description of its methodology.

However, the Microsoft white paper drops some hints about the methodology.  Hint number 1: they note that most TV White Space operations will be on TV channel 37 (608–614 MHz) or in the duplex gap (652–663 MHz).

Hint number 2: they state (incorrectly—more about that later) that a TV White Space signal can travel four times the distance of a 2.4 GHz Wi-Fi signal.  Well four times the distance is exactly what one would calculate if one used the Friis Transmission Formula to compare signal strengths and assumed that the TV White Space system operated at 600 MHz and the antennas in both systems had equal gain.  Microsoft’s assertion that signals in the TV White Space travel four times as far as 2.4 GHz Wi-Fi signals would not apply if the Wi-Fi system had an outdoor antenna as big as the antenna of the TV White Space device; here’s an example of such an antenna.

Using a big antenna like that the Wi-Fi signal received at the residence would be just as strong as the comparable White Space signal;  the uplink Wi-Fi signal from the residence to the base station would be much stronger than the White Space signal.

But the Friis formula may explain the difference found in the Boston Consulting Group study between the cost of rural coverage using TV White Space and the cost of using 700 MHz LTE.  If one were to apply the Friis transmission formula to compare the coverage of a 700 MHz LTE system with that of a 600 MHz TV White Space system in the same way that Microsoft applied it to Wi-Fi versus a TV White Space system, one would calculate that a base station in a TV White Space system would have about 1.4 times the coverage area of a base station in a 700 MHz LTE system.

If rural America could be covered by 10,000 TV White Space cells, it would require 14,000 700 MHz LTE cells to give the same coverage.  This simple calculation leads to a conclusion (national coverage using 700 MHz systems would cost 1.4 times more than would using TV White Space systems) that is essentially the same as the Boston Consulting Group study’s conclusion that using 700 MHz systems would cost about 1.5 times more.
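
The back-of-the-envelope version of that comparison, under the same free-space (Friis) assumptions:

```python
f_tvws_mhz = 600.0   # assumed TV White Space operating frequency
f_lte_mhz = 700.0    # assumed LTE operating frequency

# Under the Friis formula (equal antenna gains, free space), range scales
# inversely with frequency, so the 600 MHz cell reaches (700/600)x as far
# and covers (700/600)^2 times the area of the 700 MHz cell.
area_ratio = (f_lte_mhz / f_tvws_mhz) ** 2
print(f"One TVWS cell covers ~{area_ratio:.2f}x the area of a 700 MHz LTE cell")

tvws_cells = 10_000
lte_cells_needed = tvws_cells * area_ratio   # the text rounds this to 14,000
print(f"{tvws_cells} TVWS cells ~= {lte_cells_needed:,.0f} LTE cells for equal coverage")
```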

The analysis presented above is incorrect for at least two reasons.  First, the Friis Formula is probably not the appropriate propagation model in this context.  Second, even if it were the correct model, the conclusion would be wrong because of an omitted factor.  Wireless coverage depends on both signal strength and bandwidth.  Wireless carriers have access to 84 MHz of spectrum in the 700 MHz band—more than four times the 18 MHz of white space that Microsoft hopes will be available at all locations.   In many circumstances, that extra bandwidth would more than compensate for any slight difference in signal attenuation. Consequently, systems operating in the 700 MHz band should be more effective alternatives than is suggested by Microsoft’s analysis.

A Missing Competitor

One important factor that is not addressed in the Microsoft white paper is use of the 70 MHz of spectrum recently auctioned off by the FCC.  This spectrum brackets the duplex gap and is essentially adjacent to channel 37.  For all practical purposes, the propagation characteristics of this spectrum are identical to those of the TV White Space.  Using this spectrum, the wireless industry should be able to build systems that match or exceed the performance of systems in the TV White Space in every relevant dimension—range, speed, cost, whatever.

Was Interference Considered?

I could not find any mention of possible interference to systems operating in the TV White Space in the Microsoft white paper.  But, unlicensed spectrum can be put to many uses.  If the TV White Space performs as well as Microsoft describes, it would be a great band for surveillance cameras and telemetry equipment.  Of course, if a car dealer across the street from a cell site serving a rural area were to install several surveillance cameras operating in the duplex gap, the nearby White Space cell site would not be able to receive signals from its users in the duplex gap.

Deployment of many short-range systems in the TV White Space would make the economics of providing Internet access in the TV White Space far worse than is projected in the Microsoft white paper.  Certainly, any reasonable analysis of the cost and coverage of TV White Space service should consider the risk of such interference and should note that systems operating in the 70 MHz of licensed 600 MHz spectrum do not face a similar risk of unpredictable interference.

Satellite Improvements

A second sign of progress in the debate is that the Microsoft white paper model looked at a mix of technologies and tried to identify the most cost-effective technology for each potential user rather than impose a single, one-size-fits-all solution.  Microsoft recognizes that satellite-based Internet access is the least costly alternative in the lowest density areas.  I could not determine whether the Microsoft analysis was based on the cost and capacity of the satellites in service today or of those that will be in service five years from now.  Satellites are improving rapidly.

ViaSat 2 was launched last month. ViaSat 2 will have a throughput of 300 Gbit/sec—more than twice that of ViaSat 1 launched less than six years ago.  ViaSat has started construction of ViaSat 3, which is expected to deliver about 1200 Gbit/sec but will cost about the same as ViaSat 2 to build and launch.  (See Viasat’s 2016 annual report, page 30.)  Over about a decade, the cost of satellite capacity will have fallen by more than a factor of six.

If Microsoft’s analysis is based on the capabilities and cost of the satellites operating today, then it is based on the performance of systems two generations behind those that will be in operation in 2022.

Consider a system with key parameters similar to those of ViaSat 3—capacity of 1200 Gbps and a cost of $700 million—call it SatX.  Microsoft asserts that there are 24.3 million people in rural America that need affordable Internet access.  If one assumes that there are 2.53 people in a household, then there are 9.6 million households needing such Internet access.  Three SatX satellites would have a capacity of 375 kbps/household.

Of course, people don’t use their Internet access all the time or constantly at peak speeds.  If one further assumes that the ratio of peak to average is 100, then a constellation of three SatX satellites could deliver a 37 Mbps (downlink) service to all 24.3 million people.

Of course, 9.6 million home terminals would be needed—those would cost a few billion more.  $10 billion—the low end of Microsoft’s estimate of the cost of providing rural Internet access using the TV White Space—would buy three SatX satellites and leave $7.9 billion for home terminals, or about $800 per home.  $15 billion, the high end of Microsoft’s estimate of the cost of providing rural Internet access using the TV white space alone would buy a five-satellite constellation delivering a 60 Mbps service and leave $1,200 per home to pay for terminals.  An Excel spreadsheet showing the details of the above calculations and that allows one to analyze alternative assumptions is available here.
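
The arithmetic behind those satellite figures, reproduced as a short sketch using the assumptions stated above (small rounding differences from the text are expected):

```python
people = 24.3e6              # rural Americans needing affordable Internet access
people_per_household = 2.53
households = people / people_per_household   # ~9.6 million homes

sat_capacity_gbps = 1200     # ViaSat 3-class "SatX"
sat_cost = 0.7e9             # $700 million per satellite, build plus launch
peak_to_average = 100        # assumed ratio of peak to average usage

def service(n_sats, budget):
    capacity_bps = n_sats * sat_capacity_gbps * 1e9
    avg_per_home_kbps = capacity_bps / households / 1e3
    peak_mbps = avg_per_home_kbps * peak_to_average / 1e3
    terminal_budget = budget - n_sats * sat_cost
    return peak_mbps, terminal_budget / households

for n, budget in [(3, 10e9), (5, 15e9)]:
    peak, per_home = service(n, budget)
    print(f"{n} satellites, ${budget/1e9:.0f}B: ~{peak:.0f} Mbps peak, "
          f"~${per_home:.0f}/home left for terminals")
```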

It appears to me that it is highly likely that next generation satellites will be the low-cost alternative for serving many or most of the 19 million that Microsoft identifies as best served by systems using the TV White Space.  It also appears likely that the cost comparisons presented by Microsoft are based on last year’s satellite technology, not on the satellite technology that will be available in 2020 or 2022.

Vacant Channels

Microsoft supports keeping one TV channel vacant in every market.  Doing so will have only a minor effect on the amount of TV White Space in many rural areas—even after repacking, Montana and South Dakota will have lots of White Space.  Constraining the repacking to keep White Space available in the Northeast Corridor or Los Angeles will not improve rural Internet access.

Keeping a TV channel vacant in a location that would otherwise not have a vacant channel would increase the spectrum available at 600 MHz and below for such access services from 82 MHz to 88 MHz—an increase of less than 10%.  (That 82 MHz is the combination of 70 MHz licensed and 12 MHz unlicensed.)

Bottom Line

It is pleasing to see an advocate for unlicensed operation in the TV White Space recognize that such systems can serve only a small market.  Similarly, recognition that satellites are the least costly solution for the most rural areas is refreshing.

However, the facts that Microsoft’s white paper (1) does not consider service provided by wireless carriers using 600 MHz spectrum; (2) fails to address the problems of interference among TV White Space users; and (3) appears not to have taken into account improvements in satellite technology, indicate to me that one should give relatively little weight to their conclusions about the costs of systems operating in the TV White Space relative to the costs of commercial wireless or satellite systems.

Similarly, Microsoft’s assertion that the FCC should force the upcoming repacking of TV stations to keep an additional White Space channel available everywhere—no matter the cost in impaired TV coverage—does not seem justified.

More detail, somewhat dated now, on this issue is available in two papers by Robyn, Jackson, and Bazelon: “Unlicensed Operations in the Lower Spectrum Bands: Why is No One Using the TV White Space and What Does That Mean for the FCCs Order on the 600 MHz Guard Bands?” and “Unlicensed Operations in the 600 MHz Guard Bands: Potential Impact of Interference on the Outcome of the Incentive Auction”.

Conflict of interest disclosure.  Dr. Jackson has had clients in this area in the past.  However, he has no current projects with any of those clients and wrote this on his own initiative without any prompting from such former clients.

 


EFF’s Engineers Letter Avoids Key Issues About Internet Regulation

One of the more intriguing comments filed with the FCC in the “Restoring Internet Freedom” docket is a letter lambasting the FCC for failing to understand how the Internet works. The letter – organized and filed by the Electronic Frontier Foundation – recycles an amicus brief filed in US Telecom’s legal challenge to the 2015 Open Internet Order. The FCC letter fortifies the amicus with new material that beats up the Commission and stresses the wonders of Title II.

The new letter is signed by 190 people, nearly four times as many as the 52 who signed the original letter. Some signers played important roles in the development of the early Internet and in producing the 8,200 or so specification documents – RFCs – that describe the way the Internet works. But many signatories didn’t, and some aren’t even engineers.

The letter includes a ten-page (double-spaced) “Brief Introduction to the Internet”, a technical description of the Internet that seeks to justify the use of Title II regulations to prevent deviation from the status quo. The letter mischaracterizes DNS in a new section titled “Cross-Layer Applications Enhance Basic Infrastructure”. More on that later.

EFF Offers Biased Description of Internet Organization

Any attempt to describe the Internet in ten double-spaced pages is going to offer a big target to fact-checkers. This is only enough space to list the titles of the last six months’ worth of RFCs, one per line. But we do want something more than an Internet for Dummies introduction from the people who offer their expert assessment of the facts as justification for a particular legal policy. Unfortunately, the letter comes up short in that regard.

The first two and a half pages simply say that the Internet is composed of multiple networks: ISPs, so-called “backbones”, and edge services. These networks connect according to interconnection agreements, which the letter somewhat incorrectly calls “peering arrangements”.

This description leaves out the transit networks that connect small ISPs to the rest of the Internet (for a fee) in the absence of individualized peering agreements. This is an important omission because the overview sets up a complaint about the tradition of requiring symmetrical traffic loads as a condition to settlement-free peering.

When transit networks interconnect, settlement-free peering requires symmetrical traffic loads because asymmetry would imply that one network is providing transit for the other. Because transit is a for-fee service, it obviously would be bad business to give the service away for free. The letter fails to mention this, but insists on pointing out that ISPs provision services asymmetrically.

This misleading description of the organization of the Internet is later used to justify settlement-free interconnection to ISP networks by the large edge service networks operated by Google, Facebook, Netflix et al.

EFF Provides Misleading Discussion of Packet Switching and Congestion

The Internet is based on a transmission technology known as packet switching, which the engineers describe by referencing an out-of-print law school textbook, Digital Crossroads: American Telecommunications Policy in the Internet Age, by Jon Nuechterlein and Phil Weiser. Nuechterlein and Weiser are both very bright lawyers – Jon is a partner at Sidley Austin and Phil runs the Silicon Flatirons Center at the University of Colorado Law School – but this is probably the first time in history that a group of engineers has turned to a pair of lawyers to explain a fundamental technology for them. (This reference leads me to believe the letter was written by EFF staffers rather than by actual engineers.)

The description of packet switching omits three key facts:

  1. Packet routers are stateless devices that route each packet without regard for other packets;
  2. Packet switching is a fundamentally different transmission technology than circuit switching, the method used by the telephone network; and
  3. Packet switching increases the bandwidth ceiling available to applications at the expense of Quality of Service.

The omitted facts would have been helpful in explaining congestion, an issue that the letter combines with its description of packet switching in a way that makes it appear arbitrary. Packet switched networks are provisioned statistically, hence any network that is not massively over-provisioned will undergo periodic congestion. Therefore, any well-designed packet network must include the capability to manage congestion.

Internet Congestion Management is a Troubled Topic

In the case of the Internet, congestion management is a somewhat troubled topic. It was originally addressed by Vint Cerf through a mechanism called Source Quench, which was discarded because it didn’t work. Quench was replaced by the Jacobson Algorithm (a software patch consisting of two lines of code), which was at best a partial solution.

Jacobson’s patch was supplemented by Random Early Detection, which didn’t solve the problem entirely either. The current status quo is Controlled Delay Active Queue Management (CoDel), a somewhat less than ideal system that seeks to manage transmission queues more accurately.
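
For readers who haven't seen it, the heart of Jacobson-style congestion control is slow start plus additive-increase/multiplicative-decrease of the sender's congestion window. A stripped-down sketch, not the actual TCP code:

```python
def aimd(events, cwnd=1.0, ssthresh=64.0):
    """Toy congestion window trace: 'ack' stands for a successful round trip,
    'loss' for detected packet loss."""
    trace = []
    for event in events:
        if event == "ack":
            if cwnd < ssthresh:
                cwnd *= 2          # slow start: exponential growth
            else:
                cwnd += 1          # congestion avoidance: additive increase
        elif event == "loss":
            ssthresh = max(cwnd / 2, 1.0)
            cwnd = ssthresh        # multiplicative decrease (Reno-style halving)
        trace.append(round(cwnd, 1))
    return trace

print(aimd(["ack"] * 8 + ["loss"] + ["ack"] * 4))
```

RED and CoDel, mentioned above, work on the router side of the same loop: they decide when to drop (or mark) packets so that senders running this kind of algorithm back off before queues grow too long.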

The letter claims “the sole job [of routers] is to send packets one step closer to their destination.” This is certainly their main job, but not their only one. Routers must implement the Internet Control Message Protocol (ICMP) in order to support advanced routing, network diagnostics, and troubleshooting. So routers also have the jobs of helping administrators locate problems and optimizing traffic streams.

Internet tools such as traceroute and ping depend on routers for packet delay measurement; networks also rely on ICMP to verify routing tables with “host/network not reachable” error messages and to send ICMP “Redirect” messages advising them of better routes for packets with specific Type of Service requirements. The Redirect message tells the sending computer or router to send the packet to a different router, for example. So the sole job of routers is to implement all of the specifications for Internet Protocol routers.
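
Those diagnostics are easy to exercise from any host; a trivial sketch that shells out to the system ping utility (it assumes a Unix-like system with network access, where "ping -c" is available):

```python
import subprocess

def probe(host, count=3):
    """Send ICMP echo requests via the system ping utility and show the result."""
    result = subprocess.run(
        ["ping", "-c", str(count), host],
        capture_output=True, text=True, timeout=30,
    )
    print(result.stdout or result.stderr)

probe("example.com")
```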

Two router specifications that go unmentioned are Integrated Services and Differentiated Services. The letter makes no mention of Source Routing, a system that allows applications to dictate their own paths through the Internet. While Source Routing is rare, IntServ is used by LTE for voice and DiffServ is used within networks for control of local traffic. Both are likely to play larger roles in the future than they’ve played in the past.

EFF Makes False Claims about Best Efforts

The letter claims all Internet traffic is sent at the same baseline level of reliability and quality: “Thus the Internet is a ‘best-effort’ service: devices make their best effort to deliver packets, but do not guarantee that they will succeed.”

The Internet Protocol is actually more of a minimum effort system that lacks the ability to perform retransmission in the event of errors and congestion, but the Internet as a whole provides applications with very reliable delivery, at least a 99.999% guarantee. But it does this because all the networks cooperate with each other, and because software and services cooperate with networks. Hooray for TCP!

The term “best efforts” needs a better definition because it means two things: 1) The lack of a delivery confirmation message at the IP layer; and 2) the expectation that quality will vary wildly. The first is a design feature in IP, but the latter is not. Variable quality is a choice made by network operators that is actually a bit short of universal.

EFF’s Views on Layering are Mistaken

After discussing packet switching in a patently oversimplified way, the letter goes utterly off the rails in attempting to connect the success of the Internet to design principles that don’t actually exist. It sets up this discussion by offering a common misunderstanding of network layering: “the network stack is a way of abstracting the design of software needed for Internet communication into multiple layers, where each layer is responsible for certain functions…”

Many textbooks offer this description, but network architects such as John Day dispute it. In Day’s analysis, each layer performs the same function as all other layers, but over a different scope. That function is data transfer, and the scopes differ by distance. A datalink layer protocol (operating at layer two of the OSI Reference Model) transfers data from one point to one or more other points on a single network. Wi-Fi and Ethernet networks within a home or office are layer two networks that do this job incredibly well; they’re joined into a single network by a network switch with both Ethernet ports and Wi-Fi antennas.

A layer three network – using IP – transfers data over a larger scope, such as from one point on the Internet to one or more other points on the Internet. This is the same job, but it goes farther and crosses more boundaries in the process. So layering is more about scope than function.

There is a second misunderstanding with respect to cross-layer interactions. The EFF letter says the code implementing layers needs to be: “flexible enough to allow for different implementations and widely-varying uses cases (since each layer can tell the layer below it to carry any type of data).” This description reveals some confusion about the ways that layers interact with each other in both design and practice. Standards bodies typically specify layers in terms of services offered by lower to higher layers and signals provided by lower to higher layers.

For example, a datalink layer may offer both urgent and relaxed data transfer services to the network layer. If the design of the protocol stack is uniform, this service option can percolate all the way to the application. So applications that need very low latency transmission are free to select it – probably for a higher price or a limit on data volume over a period of time – and applications that are indifferent to urgency but more interested in price are free to make an alternate choice. The actual design of the Internet makes this sort of choice possible.
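
A sketch of what percolating such a service option up the stack might look like; the class names and the urgent flag are hypothetical, not an existing API.

```python
class DatalinkLayer:
    def send(self, payload, urgent=False):
        queue = "expedited hardware queue" if urgent else "best-effort queue"
        print(f"datalink: {payload!r} -> {queue}")

class NetworkLayer:
    def __init__(self, datalink):
        self.datalink = datalink
    def send(self, payload, urgent=False):
        # the option is simply passed down; each layer does the same data
        # transfer job over a wider scope
        self.datalink.send(payload, urgent=urgent)

class Application:
    def __init__(self, network):
        self.network = network
    def send_sensor_reading(self, reading):
        self.network.send(reading, urgent=True)    # latency-critical
    def send_backup_chunk(self, chunk):
        self.network.send(chunk, urgent=False)     # price-sensitive, delay-tolerant

app = Application(NetworkLayer(DatalinkLayer()))
app.send_sensor_reading("temp=87C")
app.send_backup_chunk("block-0001")
```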

EFF Misconstrues the End-to-End Argument

From the faulty description of layers, the EFF letter jumps right into a defective explanation of the end-to-end argument about system design, even going so far as to call it a “principle”:

In order for a network to be general purpose, the nodes that make up the interior of the network should not assume that end points will have a specific goal when using the network or that they will use specific protocols; instead, application-specific features should only reside in the devices that connect to the network at its edge.

There’s a difference between “general purpose” technologies and “single purpose” ones that the letter doesn’t seem to grasp. The designs of the datalink layer, network layer, and transport layer protocols don’t assume that applications have the same needs from the network. Hence, the Internet design reflects a multi-purpose system designed to accommodate the widest possible set of use cases. This is why there is both a Transmission Control Protocol and a User Datagram protocol at layer four. The letter describes them accurately as a mode of transmission that values reliability and correctness (TCP) and one that values low latency (UDP).

Elements of Every IP Datagram

For these two transmission modes to work correctly, lower layers – IP and the datalink – need to have the ability to tailor their services to different needs. We can certainly do this at the datalink layer, which the letter fails to describe. Datalink services in Wi-Fi and Ethernet offer the options for urgent and relaxed delivery. These modes are accommodated by the Wi-Fi 802.11e Quality of Service standards and Ethernet 802.1p options.

The IP layer’s ability to request Quality of Service from the datalink layer is implemented by the Type of Service field in the IP datagram header, and also by the IntServ and DiffServ standards. ToS is also a property of routes, and is supported by ICMP, a required feature of IP routers. So the Internet is a multi-purpose rather than a single purpose network.
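
Applications can already ask for differentiated treatment through the ordinary socket API. A minimal sketch that marks a UDP socket's traffic with the DiffServ Expedited Forwarding code point; this works on Linux, and whether the marking is honored depends on the networks along the path. The destination address is a documentation placeholder.

```python
import socket

EF_DSCP = 46                 # Expedited Forwarding, RFC 3246
TOS_BYTE = EF_DSCP << 2      # DSCP occupies the top six bits of the old ToS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_BYTE)

# Each datagram sent from this socket now carries the EF marking in its IP header.
sock.sendto(b"low-latency sample", ("198.51.100.10", 5004))
```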

You can still stress your belief in the end-to-end “principle” without pretending that the Internet lacks the ability to tailor its internal service to specific classes of applications. All the end-to-end argument really says is that it’s easier to develop applications that don’t require new features inside a network. It doesn’t say you have to pretend one size fits all.

In fact, the classic paper on end-to-end arguments (cited by the EFF letter) acknowledges the role of intelligence inside the network for performance reasons:

When doing so, it becomes apparent that there is a list of functions each of which might be implemented in any of several ways: by the communication subsystem, by its client, as a joint venture, or perhaps redundantly, each doing its own version. In reasoning about this choice, the requirements of the application provide the basis for a class of arguments, which go as follows:
The function in question can completely and correctly be implemented only with the knowledge and help of the application standing at the end points of the communication system. Therefore, providing that questioned function as a feature of the communication system itself is not possible. (Sometimes an incomplete version of the function provided by the communication system may be useful as a performance enhancement.)
We call this line of reasoning against low-level function implementation the “end-to-end argument.”

So we don’t need to pretend that a perfectly dumb network is either desirable or even possible in all application scenarios.

What Makes a Network “Open”?

Let’s not forget that the goal of the FCC’s three rulemakings on the Open Internet is to provide both users and application developers/service providers with easy access to all of the Internet’s capabilities. We can do that without putting a false theory of the Internet’s feature set in place of a true one. The EFF and its friends among Internet theorists offer us a defective view of the Internet in terms of packet switching, congestion, router behavior, layering, and the end-to-end argument in order to support a particular legal/regulatory argument, Title II classification for Internet Service.

Title II may very well be the best way to regulate the Internet, but I’m suspicious of any argument for it that does violence to the nature of the Internet as the EFF letter has clearly done. EFF’s omissions are troubling, as there’s a very straightforward argument for classifying datalink service under Title II even though Internet Service fits better in Title I.

I’ll have more to say about this issue before I file reply comments pointing to these and other errors in the EFF letter.

UPDATE: For more on this letter, see the next post.

The post EFF’s Engineers Letter Avoids Key Issues About Internet Regulation appeared first on High Tech Forum.

Internet Pioneers Discuss Network Architecture and Regulation


This special podcast is the audio portion of a webcast with three special guests who deserve the title of Internet Pioneers: Tom Evslin, the founder of ITXC, the first company to transport phone calls over the Internet and owner of the outstanding Fractals of Change blog; John Day, the author of Patterns in Network Architecture and the manager of the team that created the layered OSI model of network architecture; and Barry Shein, founder of The World, the first commercial ISP and one of the 11 People Who Made the Internet Possible.

We discuss the issues at stake in the FCC’s Restoring Internet Freedom docket that proposes to unwind the 2015 FCC classification of the Internet under regulations designed for the common carrier telephone network.

Here are some of the reactions from Twitter while the webcast was in progress.

Tom explained the difference between telephone call routing and Internet packet routing:

Barry and John discussed the value of zero-rating for lowering barriers to Internet participation in developing countries:

Tom pointed out that regulating non-monopoly businesses as if they were monopolies prevents innovation:

We all broke down the difference between the circuit-switched telephone network and the packet switched Internet in some depth:

Tom stressed the importance of light regulatory touch for stimulating network innovation as well as application innovation:

John mentioned that the Internet is missing two layers of architecture compared to other packet-switched networks, and that the advocates of Title II don’t know it. Barry pointed out that the ability to commit mischief isn’t limited to ISPs:

I opined that net neutrality is more of an incumbency protection racket than a stimulant to innovation:

As much as we may pretend, the Internet is far from perfect and still needs a lot of work:

Tom explained that heavy regulation on any industry helps incumbents more than upstarts:

And the climax was Barry’s comparison of the Title II debate to the Game of Thrones:

You can see the video on our Facebook page and on YouTube now.

 

The post Internet Pioneers Discuss Network Architecture and Regulation appeared first on High Tech Forum.


Helping the FCC Get Broadband Right


The FCC’s annual inquiry on the state of US broadband is underway and we’re here to help. This process, mandated by federal law, seeks to discover whether advanced networks are being deployed across the nation. If the FCC finds they aren’t – if it finds gaps, for instance – it’s also required to take deregulatory steps to accelerate progress. The fundamental question is “whether advanced telecommunications capability is being deployed to all Americans in a reasonable and timely fashion.”

What Data Should the FCC Examine?

The FCC collects “Form 477 data” from carriers to map deployment, but this data has some limitations. It asks carriers to report on the services they offer by census tract, but census tracts are not households or business addresses. Census tracts (or “blocks”) are also not the same size, either in area or in population.

We’re more likely to find deployment gaps in rural areas, where the problems with census tract granularity are most severe. Rural tracts tend to be twice as large and less than half as populous as urban ones, for example.

Census tracts also have hard boundaries, which wireless systems don’t respect. There may be wireless coverage in areas where network planners don’t expect it to go, and there may not be coverage in all the areas where they do.

So there’s an inherent problem with using a grid based on political boundaries instead of starting with a coverage map and finding out how many people live in the bright colors. The census block approach is wireline-oriented, but we’re in a mobile-first world now. So I would ask for coverage maps and then overlay them on population maps to count served and unserved people.

What About the Rest of the World?

Form 477 is also not helpful for international comparisons because other nations don’t necessarily take such an approach. International coverage comparisons are often drawn from OECD data, but nations tend to report their idiosyncratic measurements instead of taking a fresh look with a consistent approach.

This may not matter given that global data sets such as Akamai’s State of the Internet and Ookla’s Speedtest Global Index are the gold standards for nation-to-nation speed assessment. The nations with a serious commitment to advanced networks also use SamKnows testing, as we do.

So a comparison with all the other SamKnows nations would be meaningful once we apply necessary corrections for densely populated nations with very little rural population. As it turns out, the nations with high scores have very few rural residents as a percentage of total population. They also have very high rates of multiple-dwelling-unit housing, which is very cheap to serve.

But the traditional measurements of broadband speed, price, coverage, and use aren’t very useful.

What’s the Benchmark?

The big controversy is about the benchmarks the FCC sets for the definition of “true broadband”. The current threshold – I call it the Wheeler Standard – is 25 Mbps down and 3 Mbps up. If a rural provider offers 10 down and 1 up they can still get a subsidy, but the offering won’t show up in the FCC report as True Broadband.

The Wheeler Standard has no real function but to make it appear that we don’t have a competitive broadband market. A regulatory standard whose value is solely political is an abuse of regulatory power and should be abandoned. But you can’t revise a political standard without a political justification, can you? Apparently we’re stuck in politics if we keep the Wheeler Standard and also if we abandon it.

That’s unacceptable, so we need to think about what the broadband standard should be if Tom Wheeler had never been born. The law says the focus of the inquiry is on advanced networking capability, which suggests Congress is more interested in what we can do with our networks than with arbitrary labels.

Last Year’s Comment

A year ago, I filed comments with the FCC suggesting five priorities:

  1. Develop a coherent methodology
  2. Use clear terminology
  3. Consult public research
  4. Stick to the subject matter
  5. Focus on rural America

The FCC ignored this recommendation, but a lot has happened since. I think last year’s benchmark suggestion was correct:

This report to Congress on the state of broadband in the US will be the twelfth in the series. Regardless of its contents, we will not be able to examine the entire series as a unit to observe trend lines in any coherent way. This is odd because trend lines moving in the right direction are the hallmarks of progress.

This sad state of affairs comes about because every 706(b) report reads as if it were the first ever undertaken. Consequently, my overarching desire is for the Commission to recognize that the 706 report is a continuing obligation that should be discharged in a consistent, coherent, and objective fashion from year to year.

This means creating a methodology that does not require the FCC to create a new magic number for download speed and related metrics every year in order to exclude developments in hard-to-serve communities that are indications of progress. It also means defining “advanced telecommunications capability” in terms of application support rather than as a network-intrinsic capability. And it also means refraining from introducing squishy new network metrics that neither the Commission nor anyone else can measure or evaluate.

So let’s look at the networking requirements of the top applications and web sites and develop a benchmark that enables at least 80-90% of Americans to use all of them. My guess is the 25/3 standard will come off as too high on the download side and too low on the upload side.

It certainly favors cable modem over DSL, fiber, wireless, and satellite, so at least we need to correct that. Three Mbps is too low for any kind of cloud backup or cloud access, and 25 conveniently steps on 24 Mbps DSL offerings. It’s also just 2 Mbps above the average speed of mobile in the US. The 25 Mbps benchmark gives off a mighty stench.

What are the Roles of Wireline, Mobile, and Fixed Wireless Technologies?

The law directs the FCC to consider: “advanced telecommunications capability . . . without regard to any transmission media or technology, as high-speed, switched, broadband telecommunications capability that enables users to originate and receive high-quality voice, data, graphics, and video telecommunications using any technology.”

This leads to a great deal of quite pointless discussion every year about whether wired and mobile are substitutes or complements. The real difference between the two is more about marketing than about technology. Mobile data plans are capped at 20 – 25 GB/month, even if they’re sold as unlimited; wired data plans are capped in the 100s of gigabytes per month, or not at all.

For the time being, wired is the Netflix enabler but mobile is the network we really, truly want all the time. When consumers are strapped for cash and have to give up one of their two ISPs, the wire goes first.

So mobile is the primary connection and wired is an accessory that keeps the bill under control. As we move into 5G and caps grow higher, wired network connections to homes and small businesses will simply fade away except for special purposes.

So the report should enable us to see how fast our progress is toward pervasive, high-speed, mostly uncapped mobile connections. It’s not hard to figure out how to get there once you know where you’re going.

The Role of the Internet

The FCC and Congress have placed too much emphasis on broadband network price and performance progress and too little on the web and the Internet. Our wired networks are way over-built, especially so in consideration of their role as feeders to 5G and Wi-Fi.

All of our performance measurements should include a large dose of web page loading speeds and QoE bandwidth for non-web applications like video chats, voice, and IoT. The Measuring Broadband America program and the International Broadband Data Reports should be merged into the 706 report to provide a comprehensive picture.

And the overall report should emphasize applications at least as much as networks. We want to know if applications and networks are truly goosing each other as the former FCC imagined they do.

If they aren’t, we need to know why. All in all, the 706 report has degenerated into a political exercise in the last decade, more concerned with supplying advocates with talking points than with charting a course for industry development. It will probably take another year or two to get it back on course, but we can start now by clarifying our goals.

Clearing the Clutter

In 2017, “advanced telecommunications capability” means 5G wireless networks. One way or another, the FCC’s report should examine the rate of 5G deployment and the actions it can take to make the rollout as fast and painless as possible.

5G is going to make the edge networks of today – DSL, cable, FTTH and 4G – obsolete and irrelevant. Full deployment requires new chips, devices, and software. It also means permitting for small cell builds with backhaul to the legacy network.

And it means enriching the infrastructure for CDNs and providing QoE for 5G applications. Rather than spinning another round of the old networking battles over thresholds and “neutral networks” that don’t do what we need, the FCC should focus on the things it can do to resolve regulatory obstacles to universal 5G.

If it does this, the legacy wireline stuff will take care of itself.

The post Helping the FCC Get Broadband Right appeared first on High Tech Forum.

The Internet After Net Neutrality


As we approach the FCC’s repeal of its Obama-era net neutrality regulations, boosters are panicking. Chairman Ajit Pai has made a bold decision, moving enforcement of unfair and deceptive broadband trade practices to the Federal Trade Commission. The implications are sinking in: Pai has called the net neutrality movement’s bluff and they want to delay the Commission’s vote.

Prophesies of the end of the Internet as we know it will now be tested, and they’re probably going to fail: ISPs are not going to offer pay-to-play fast lanes to websites. This has long been clear to networking geeks because the web comes nowhere close to using the speed ISPs make available to it already.

The Web Performance Gap

The average speed of America’s fixed-line broadband connections is 75 Mbps (according to Speedtest.net). If ISPs controlled how quickly we receive content, we would expect web pages to load at speeds close to these measured averages. But they’re nowhere close.

According to the FCC’s measurements, typical web pages won’t load faster than 12 – 15 Mbps regardless of network speed. Changing from a 15 Mbps plan to a gigabit subscription will make gaming and video editing faster, but it won’t do a thing for mainstream video streaming and the web.
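
A rough way to check this for yourself is to time a single page fetch and convert it to an effective bit rate. The sketch below uses only the Python standard library; the URL is a placeholder, and because it measures one HTML document rather than a full page with all of its objects, treat the result as an approximation of the effect described above.

```python
import time
import urllib.request

URL = "https://example.com/"  # placeholder; substitute any page

start = time.monotonic()
with urllib.request.urlopen(URL, timeout=10) as resp:
    body = resp.read()
elapsed = time.monotonic() - start

# Effective throughput for this single fetch, in megabits per second.
mbps = (len(body) * 8) / (elapsed * 1_000_000)
print(f"{len(body)} bytes in {elapsed:.2f} s, roughly {mbps:.1f} Mbps")
```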

No web-based service is going to pay ISPs to speed up parts of the Internet that consistently outperform their own equipment. Some apps will benefit from – and pay for – higher quality, but not the web.

Speeding up the Web

When Internet companies want to deliver their content faster they pay content delivery networks. CDNs provide more servers placed close to end users than small companies can buy on their own. They do this because overloaded web servers are the speed bumps on the road to a good Internet experience. But net neutrality advocates have taught us to blame ISPs even when they’re not at fault.

ISPs have never offered web acceleration for a fee. They could have offered such a service at any time prior to 2015, but they didn’t because it makes absolutely no sense. ISPs can’t “accelerate” web traffic, they can only forward it to users as fast as they receive it from websites, which isn’t very fast.

Former FCC chairman Tom Wheeler failed to conduct a thorough, impartial analysis of the advocacy claims animating his decision to impose bright line rules on broadband networks. Incidents of misbehavior have occurred, but not so frequently as to warrant pre-emptive regulations. The practices he banned can all be beneficial in certain contexts, rhetoric notwithstanding.

Crying Wolf

The death of net neutrality is an existential threat to the pressure groups who’ve claimed the Internet owes more to federal regulators than to technical innovators and risk-taking entrepreneurs. If the Internet works better next year and the year after than it does today, we’ll know the net neutrality movement has been crying wolf.

By the same token, if it collapses we’ll applaud them for telling us the honest truth and rush to restore Wheeler’s regulations.

Calling the Bluff

In erasing the 2015 order, Chairman Pai is closing a policy debate that has raged in Washington for 15 years and in the engineering community since the 1970s.

Few net neutrality advocates realize that the current Internet regulation argument is an echo of historical engineering discourse about where to locate control points – traffic lights – in computer networks. When computers were slower, it was necessary to choose between more lanes on the information freeway and metering lights.

Creative Synthesis

The technical debate came to an end in the ‘90s when semiconductor chips became fast enough to do both. Modern Ethernet is a synthesis of 1970s Ethernet and its competitor, the IBM Token Ring. The policy community didn’t get the memo on this development and the pressure groups don’t appear to care.

The truth is that networks need to be extremely fast, highly reliable, and very well-behaved. But networks are only one part of the Internet, and not the most troublesome or dangerous one at that. Content, malware, and surveillance are the real trouble spots.

Having it All

We’re paying the price for placing too little emphasis on the health of the overall Internet and too much on the facets – such as broadband speed – that are easiest to measure. Net neutrality didn’t cause fake news; inattention to the social consequences of user-generated content did.

Net neutrality’s creators and protectors – chiefly, law professors Mark Lemley, Lawrence Lessig, Tim Wu, and Barbara van Schewick – guessed that the Internet’s ideal regulatory paradigm might be inherent in its design. This guess was wrong, but it took 20 years to disprove.

Let’s not be distracted by shiny objects any more. The Internet still has tremendous promise as well as serious problems to solve. Making it better through continuous experimentation should be the top priority.

See my research paper: “You Get What You Measure: Internet Performance as a Policy Tool” for detailed breakdowns of Internet performance factors.

The post The Internet After Net Neutrality appeared first on High Tech Forum.

Community Broadband is Cheaper – and Slower


A recent study by the Berkman Klein Center for Internet & Society at Harvard shows that publicly-funded broadband networks are cheaper – but slower – than those built with private capital. On average, consumers who buy broadband service from a government provider pay $10 per month less than those who patronize commercial providers, but their download speeds are close to 7 Mbps slower: 42.59 Mbps versus 49.12 Mbps, a gap of roughly 15 percent.

The study claims community networks are “value leaders” because its analysis focuses exclusively on consumer prices, excluding both construction costs and speeds. But the actual data discovered by the researchers tells a very different story: government customers pay less simply because they get less. It’s not surprising that slower networks are cheaper, regardless of how they’re financed. (See this spreadsheet I’ve made from the Berkman Klein data for the raw data and the averages.)

The speed difference is substantial: consumers of public broadband are saving $10/month by giving up enough bandwidth to support two Netflix streams. But perhaps they don’t need it because all the network packages in the study are 25 Mbps or faster. The most bizarre feature of the report is its characterization of plans with average download speeds of 43 – 50 Mbps as “entry-level” plans.

The Case for Faster Networks Takes a Hit

The Berkman researchers are fans of community broadband, even in markets with two commercial suppliers of wired broadband, such as Longmont, Colorado, Morristown and Chattanooga, Tennessee, and Issaquah Highland, Washington. They like the fact that government networks don’t offer low initial “teaser” rates that rise down the road.

Their primary claim is that the $10 lower monthly rate offered by publicly-financed wireline networks will appeal to today’s broadband non-adopters. But they’re forced to admit that two-thirds of this group is deterred from using the Internet for reasons other than price and gloss over the appeal of “teaser” rates to price-sensitive non-adopters.

International pricing comparisons have long shown that US prices for low-speed broadband are among the lowest in the world. These tend to be extremely slow offers, at or below 10 Mbps. If consumers are more interested in low prices than high speeds, perhaps it’s wise for government policy makers to stop sneering at lower speed broadband packages.

The Berkman Methodology Follows New America

The Berkman researchers – David Talbot, Kira Hessekiel, and Danielle Kehl – followed a methodology developed by New America Foundation (Kehl’s employer) for its “Cost of Connectivity” reports. Researchers choose a small collection of cities (24 in one NAF study, 27 in this one), obtain prices from easily accessible sources such as newspaper ads and company websites, and then draw comparisons favorable to government-funded networks.

This methodology suffers from a small sample size as well as selection bias. There’s no way of knowing whether the two dozen cities are representative of the whole picture. In this instance, cities were chosen for Berkman by Chris Mitchell, an advocate whose employer consults with cities on community network projects.

The researchers excluded wireless and satellite networks as well as service plans in the sub-25 Mbps range that generally provide consumers with the best price/benefit ratios. In fact, Berkman’s entire discussion of consumer benefit is limited to monthly bills averaged over four years.

That’s a peculiar approach for consumer advocates. Its limited value may explain why NAF hasn’t published a “Cost of Connectivity” report since 2014.

Pitfalls to Avoid in the Berkman Study

Like the NAF studies, the Berkman study’s text fairly boils over with spin. Let’s highlight some of the more creative ways the authors try to avoid acknowledging the message the data itself conveys.

  • First, the survey fails to include information about plans provided by AT&T, Verizon, and Time Warner Cable (now Spectrum) on the grounds that website terms of use prohibit its use or publication. But I can’t find any language in these TOUs that would discourage researchers. AT&T doesn’t want its site scraped by bots, but this was obviously a manual survey. Spectrum’s TOUs are virtually non-existent. The report claims Verizon has a restrictive TOU, but it doesn’t appear on Verizon’s website. This looks like a dodge.
  • The survey found four cases in which the community network was more expensive than the commercial option and one where it was both more expensive and slower. I imagine the taxpayers who paid for the Churchill, NV network that offers 35 Mbps service for $25 more than Spectrum’s 60 Mbps offering are not very happy; the study offers no analysis of this situation.
  • The survey claims: “8.9 percent of Americans, or about 29 million people, lack access to wired home broadband service” based on data found in the FCC’s Restoring Internet Freedom order. The link to the order is broken, which is a shame because it’s worth reading for the nuance it provides:

    Expanding the mode from wired only to either wired or fixed (not mobile) wireless brings the percentage down to 4.3 from 8.9. Reducing the speed threshold from 25 to 10-24 reduces it even further to 0.1%. So context matters. If 9 percent of Americans can’t get broadband, our existing subsidy programs aren’t working. But a tenth of a percent gap is better than the wired telephone network did at its peak.

  • The report focuses on FTTH networks on the assumption that “fiber will likely be the technology of choice for any new public or private networks…fiber requires the highest up-front investment and installation costs.” Many of us see 5G wireless as the technology of choice for both mobile and fixed residential broadband in the next 2 to 5 years. The exclusion of wireless technologies that perform as well as wired ones is an arbitrary move that distorts the data. In fact, new networks are underway that will offer up to 1 Gbps over 5G.
  • The report’s focus on networks that support 25 Mbps or faster downloads is also arbitrary. The Wheeler FCC did define “broadband” as 25 down/3 up for public relations purposes, but nevertheless continued to subsidize 10 – 24 Mbps networks. If we follow the money we have to admit that even an FCC captured by Netflix behaved as if 10 Mbps were the real definition of broadband.
  • The report’s conclusion doesn’t follow from the data:

    Our study, though limited in scope, contains a clear finding: community-owned FTTH networks tend to provide lower prices for their entry-level broadband service than do private telecommunications companies, and are clearer about and more consistent in what they charge. They may help close the “digital divide” by providing broadband at prices more Americans can afford.

    There is no meaningful comparison of alternatives to government network construction in the study. It is quite probably the case that subsidy programs that encourage non-adopters to use existing networks are more efficient at getting them online than building entire new networks that, at best, only address a third of the overall reason for non-adoption. And we’re too close to 5G to be spending taxpayer money on FTTH networks that have already lost their luster.

  • There is no analysis of the customer service, reliability, and repair characteristics of public networks. Most of the consumer complaints about existing commercial networks relate to these issues, so it would be extremely valuable to know whether government can do a better job. I don’t know the answer to this question, but if we extrapolate from our experience with the motor vehicle department and the tax collectors there’s not much basis for optimism.

In Conclusion

Journalists and bloggers were only too eager to tout New America’s “Cost of Connectivity” reports because their message was so simple and so appealing. Price-gouging complaints are the number one staple of consumer-focused journalism in all sectors.

But this report is so poorly done that it basically destroys itself. It would have been wise for the researchers to study Berkman’s Benkler report, “Next Generation Connectivity: A review of broadband Internet transitions and policy from around the world“, written for the National Broadband Plan. That report shows that the pricing strategy employed by US broadband suppliers is below the OECD average for lower speed plans and above it for higher speed plans (see figures 3.23 – 3.27).

It stands to reason that the response of broadband incumbents to price-based competition from government agencies that can offer service below cost would be to concentrate on higher speed offerings. Broadband, like most consumer goods, is either marketed on price or on quality. Oddly, the entry of low-price government networks into competitive markets probably serves to raise the overall price paid for broadband.

But it will take another study to prove that.

UPDATE: The study says municipal networks are up to 50% cheaper than commercial ones, a point that has been picked up by the troll blogs. But the study’s data also says commercial networks are up to 30% cheaper than munis. Turning to speed, the study data says commercials are up to 150% faster than munis, and that munis are also up to 75% faster than commercials.

I don’t think “up-to” is a very meaningful metric in this instance.

The post Community Broadband is Cheaper – and Slower appeared first on High Tech Forum.

2018 Broadband Deployment Report


After flirting with some major restructuring in the way broadband progress is assessed in the US, Chairman Pai has released a fact sheet that maintains the analytical status quo with one significant exception.

The FCC will continue to measure progress toward the 25 Mbps down/3 Mbps up standard created by the Wheeler FCC. It will also continue to assess fixed and mobile networks separately, but it will chart their progress toward a common speed/capacity goal.

Most importantly, it will judge the rate of progress we’re making as a nation in meeting the goals for everyone. That means it won’t automatically judge anything less than 100% as failure as the Wheeler FCC did.

The Public Relations Value of High Goals

Section 706 of the Telecommunications Act gives the FCC greater powers when the nation is failing to make reasonable progress toward universal deployment than it would have otherwise. Hence, the agency has a conflict of interest in defining the nation’s status.

It was disingenuous of the Wheeler FCC to raise the bar for broadband from 10/1 to 25/3. That action allowed it to declare a policy failure that was instrumental to its argument that broadband needed to be regulated under Title II.

It was also widely applauded by the pressure groups who agitate for government-financed and -operated broadband networks. You have to look no further than the recent Berkman Center report on municipal networks to see how that logic goes.

Commercial Networks are More Dynamic

That report discovered a small price difference between commercial and government networks that it could link to greater adoption since price is a factor in non-adoption. The difference is fairly small – $10/month with some creative math about equipment rental fees and one-time charges – and it’s clouded by the fact that the lower priced networks are slower regardless of who supplies them.

But setting the bar for broadband higher than it should be has public relations value. Unfortunately, correcting Wheeler’s exercise would produce needless controversy. The nation is progressing toward the new goal rapidly in any case.

Keeping the same standard in place is good for long-term analysis of broadband progress. Ultimately, friends of the more dynamic commercial networks will be able to show Pai’s Title II roll-back contributing to the deployment of faster networks in more places.

Complaining

Because Pai has decided to keep the 25/3 standard and to evaluate fixed and mobile networks separately, his critics don’t have much to complain about. So they seized on the “are we there yet?” evaluation that the chairman adopted.

The law is reasonably clear on the point that Congress wants to be apprised of progress and not just of final victory. Before we reach universal deployment of 25/3 (which we almost have), the bar should be raised again, probably to 50/20.

At that point, we should be well into the 5G rollout, there should be much more FTTH, and DOCSIS will be a 1 Gbps symmetrical system. Urban networks will be far ahead of the FCC’s goal because they already are. Rural is where it matters.

Let’s Fact-Check Twitter

So critics among the pressure groups, their mouthpieces, and the FCC minority attacked Chairman Pai for somehow leaving people behind. Commissioners Rosenworcel and Clyburn made similar claims:

These claims are false. The FCC’s June 2016 data showed the number of unserved was down to 15 million and it has certainly fallen since then; see paragraph 124 of the December 2017 RIF order:

FCC Restoring Internet Freedom Order, paragraph 124.

The advocates are ignoring fixed wireless options and claiming fixed wired is the only real broadband.

Recalibrating the Standard

Before the broadband benchmark is adjusted again, the FCC really does need to lay out a methodology for coming up with the numbers. It appears that the 25/3 standard was driven solely by the desire of Netflix to stream 4K video everywhere.

That’s pretty much a bust because very few people – if any – can detect a difference between Full HD, 2K, and 4K video. In my experience, the picture quality of Hinterland in 1080p is as good as Netflix gets. Check it out for yourself and comment to agree/disagree.

The calibration is fairly simple: inventory the major apps, calculate their requirements, add a fudge factor to allow for delivery glitches and you’re good to go. If we did that today, the guideline would probably be 18 Mbps, so 25 is just a rounding error.
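
Here’s a minimal sketch of that calibration, assuming some illustrative per-application bandwidth figures and a 25% fudge factor. None of these numbers are measurements – they’re placeholders to show the arithmetic – but with values in this neighborhood the benchmark lands near the ~18 Mbps figure suggested above.

```python
# Illustrative placeholders only -- not measured application requirements.
app_mbps = {
    "HD video stream": 5.0,
    "video conference": 3.5,
    "cloud backup (background)": 2.0,
    "web browsing and email": 2.0,
    "online gaming": 1.5,
}

concurrent_demand = sum(app_mbps.values())   # 14.0 Mbps with these values
fudge_factor = 0.25                          # assumed headroom for glitches
benchmark = concurrent_demand * (1 + fudge_factor)

print(f"Concurrent demand: {concurrent_demand:.1f} Mbps")
print(f"Suggested benchmark: {benchmark:.1f} Mbps")  # 17.5 Mbps here
```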

In any case, it’s good to have this issue off the table so we can focus on rural and 5G.

The post 2018 Broadband Deployment Report appeared first on High Tech Forum.

Trouble in Fibertown


Orem, Utah is a storied city in the annals of networking because it was the birthplace of Novell, the company that dominated computer networking in the ‘80s and ‘90s. Novell’s Netware products helped transform corporate computing from a centralized, mainframe model to decentralized, PC-oriented model.

Ironically, the company’s assets were recently acquired by the mainframe software company Micro Focus and then merged with the Hewlett-Packard Enterprise Division’s software unit. The demise of Novell was more fascinating than its rise in some ways.

Searching for UTOPIA

In 2002, Orem and a group of neighboring cities began discussions that led to the creation of the Utah Telecommunication Open Infrastructure Agency, better known as UTOPIA. UTOPIA issued bonds backed by sales tax revenues, with 11 of the 16 original cities participating.

The project broke ground in 2008, and remains about half done. It’s an understatement to say the project has been financially troubled from the start; here’s an analysis from the New York Law School’s Advanced Communications Law and Policy institute (page 75):

The cost of UTOPIA has been very high: factoring in debt service and other payments, the total cost of the network approaches $500 million. Of this, $185 million stems from long-term bond debt; the cost of the infrastructure itself was $110 million. Construction delays and lack of consumer interest required the network to use a significant amount of its bond proceeds to service its debt ($48 million) and make up for operating deficiencies ($27 million).

Having failed to sell the project to the Macquarie company of Australia in 2014, backers now propose to go back to the Orem voters for permission to take on another $40M in debt (including interest) to complete the project within their city at an accelerated rate.

Poster Child for Government Excess

Orem is served by CenturyLink and Comcast with the same options these companies offer in most places: up to 80 Mbps VDSL for the former and 250 Mbps cable modem for the latter. Consequently, the proposal to take on additional debt for UTOPIA is controversial.

Since 2008, UTOPIA has become the poster child for the poorly conceived and poorly executed government owned, taxpayer supported broadband project. It’s now been studied more closely than probably any such project except Australia’s National Broadband Network, a project with a similarly checkered history. The lesson is that building the network of tomorrow today means you have to wait until tomorrow to find customers, and even then they’ll only show up if your crystal ball gazing was correct.

And in these two cases – UTOPIA and the Aussie NBN – it wasn’t. UTOPIA was born in 2002, long before mobile devices had any use for broadband speeds, when TV was delivered over a customized network, and the TiVo was known only to a small following of techno-geeks. The rise of 2G, 3G, and 4G cellular changed all that, and 5G will change it even more.

We Were All so Naïve Once

There was a lot of enthusiasm for FTTH in 2002, most concretely at Verizon where the FiOS network was born. Like UTOPIA, Verizon believed their network was going to be so great that it would gain take rates of 40% right out of the gate and Wall Street would gush with optimism and push Verizon stock to astronomical levels. The company was able to roll out to the most promising markets quickly and economically because it already had poles and conduits in place. It quickly mastered the art of connecting homes to poles and aggressively marketed the product.

But the take rate didn’t materialize for a decade after the initial build, and the product ended up being much more expensive than imagined because the public still sees Internet as part of the double-, triple- or quad-play bundle that also includes TV. As a small player in the TV market, Verizon has never been able to negotiate the kinds of deals that larger players can make.

Inside the home, most of the devices connected to the FiOS network are wireless. So FTTH in practice is actually fiber to the router. Wi-Fi supports up to 1.3 Gbps now, more than enough capacity for the dozens of consumer devices attached to the typical home network now. Wire can scale to hundreds of Gbps, but speeds in that range don’t really matter inside the home even if they’re valuable to network-intensive businesses.

Orem’s Unique History

Orem is an important landmark to those of us who cut our teeth in the networking business in the 80s and 90s because it was the birthplace of Novell. When we designed Wi-Fi in the early ‘90s, Novell departmental file and print servers were the target application. Sadly, Novell failed to appreciate the importance of new (to the public) technologies such as the Internet, Windows 95, and mass-market distribution channels.

It seems that UTOPIA has suffered a similar fate in refusing to appreciate the importance of mobile devices, wireless IoT devices, and the needs of smart electric and transportation grids. Instead of pretending that nothing has changed since 2002, it may well be wise to take stock of today’s realities and plot a new course that builds on everything we’ve learned.

Policy makers in Orem are still engaging in turn-of-the-century debates about top-end speeds, and some of the ISPs who offer service over UTOPIA offer 10 Gbps connections. Like all wired connections, these services don’t directly support mobility, so their value is limited. But some lawmakers are reluctant to admit that FTTH will be all but completely supplanted by 5G networks a few hundred feet from the home with fiber backhaul.

Saving the Investment

Verizon has made FiOS into a nice business by making it a complement to its mobile business. Fiber installed for FiOS works quite well as backhaul, so 5G is going to be an easy upgrade in FiOS territory.

While most of the US is not FiOS territory, the learning, the parts, and the volumes will help everywhere. Verizon is set to purchase some 37 million miles of fiber shortly, and most of it is for 5G.

UTOPIA wasn’t designed to complement mobile networks, so it’s unclear how much of its existing infrastructure will be useful. UTOPIA is also shackled by its business model, one that has given rise to more than a dozen boutique ISPs that can’t offer competitive TV plans. UTOPIA homes probably get TV from satellites and skinny bundles.

Accepting the Reality of 5G

One option for UTOPIA would be to explore using its infrastructure for 5G residential service. This is probably a few years off, but Verizon and others are running trials for this sort of service today. It’s not too early to begin exploration.

Another option would be to forget about residential service altogether and focus on mobile backhaul. It’s likely that much of the existing fiber plant is in good locations for small cell backhaul. 5G also opens up a lot of “smart grid” and “smart city” applications.

The alternative for UTOPIA is to push forward with $40M in bond debt to wire the other half of the city. Given the low take rate for UTOPIA – less than 28% at present – pushing forward with the status quo doesn’t seem productive.

Users quite likely believe the network itself is dandy – which it probably is – but that the small ISPs are not what they need to be to run a double-, triple-, or quad-play business in 2018. No amount of taxpayer debt is going to change the fact that networking is an economy-of-scale business.

When faced with the need to either stagnate or grow, Novell chose the status quo path. Let’s hope Orem doesn’t repeat the error with UTOPIA. It might have been a great idea in 2002, but the visions many of us had of networking in those days were blind to the progress that was possible for wireless. That was a serious miscalculation.

The post Trouble in Fibertown appeared first on High Tech Forum.

Cloudflare’s 1.1.1.1 DNS Does Nothing for Privacy


I didn’t pay much attention to the experimental Cloudflare Domain Name Service (1.1.1.1) until I saw their co-founder & COO Michele Zatlyn describing it on Bloomberg Technology. There’s nothing novel about DNS, regardless of who supplies it. You ask it for the IP address associated with a domain name, and it gives you one relative to where you are. No big deal.

But it becomes troublesome when the over-hyping sets in. Zatlyn claimed that ISPs are free to sell browsing histories today, which isn’t quite true. She also claimed that 1.1.1.1 would speed up browsing, which is dubious. And she claimed using a DNS from a party other than your ISP would hide your web activity from your ISP, which is blatantly false.

How do I know this and why did she say it? Read on.

Do ISPs Sell Your Browsing Histories?

No. ISPs have information about your browsing histories because they know the IP addresses of the sites with which you communicate. Even if the sites you visit use TLS to encrypt the content of your communication, the ISPs can’t route your packets to the right place without a destination IP address. They also know where the elements of the web pages – pictures, snippets of text, videos, ads – come from because they have to route those as well.

This information flows over TCP, a bi-directional protocol, so every piece of information seen by the ISP is also seen by the other party in this two-way communication. Alternative DNS therefore does not prevent the ISP or the destination site from knowing the IP address – and hence the identity – of the other party. If you want to hide your browsing history from your ISP you need to use a VPN as well as cloaking your DNS queries.
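
A short sketch makes the point. It uses the third-party dnspython package (an assumption here; version 2.x) to send the lookup to 1.1.1.1 instead of the ISP’s resolver, then opens an ordinary TCP connection to the answer. The lookup is hidden from the ISP, but the destination address in every subsequent packet is not.

```python
# Requires the third-party dnspython package (assumed here):
#   pip install dnspython
import socket

import dns.resolver

# Ask Cloudflare's 1.1.1.1 directly, bypassing the ISP's resolver.
resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["1.1.1.1"]
answer = resolver.resolve("example.com", "A")
dest_ip = next(iter(answer)).to_text()

# The lookup was hidden from the ISP, but the connection is not: every
# packet below carries dest_ip in its header, so the ISP still learns
# which site is being visited.
with socket.create_connection((dest_ip, 443), timeout=10):
    print(f"Connected to {dest_ip}; the ISP can see this address.")
```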

Both ISPs and web trackers know your browsing histories, but neither sells this information. If they did, we would likely see a firestorm of criticism and some lawsuits. How these suits come out is a matter for lawyers to judge, but nobody with the information is currently testing the waters.

Will Cloudflare Speed up Your Web Browsing?

No. They may in fact resolve domain names a few milliseconds faster than your ISP’s DNS, but there’s not going to be enough of a difference for you to notice. The reality of web browsing speed is that it’s determined by the sites as long as your Internet package is faster than 15 Mbps.

We’ve seen steady increases of 30% per year in raw broadband speed over the last eight years, but web page load times have remained stagnant. I presented a paper on this at TPRC last year: You Get What You Measure: Internet Performance Measurement as a Policy Tool.

DNS lookups are not a significant part of web page load time, so eliminating them altogether wouldn’t make any difference. And 1.1.1.1 isn’t the fastest DNS on the market anyway: it depends entirely on where you are. 1.1.1.1 does have nice average speeds, but it’s brand new and has fewer customers. Despite this advantage, there are faster DNSes in Atlanta, New York, Montreal, Frankfurt, and other places.

The open, non-profit Quad9 DNS* delivers the quickest absolute lookup times, and it also checks the domain you’re resolving for presence in IBM’s threat database. Knowing I’m about to visit a malware site is more important to me than shaving 4 or 5 thousandths of a second off of the page’s 2 to 3 second load time. That said, Cloudflare is able to resolve its customers’ addresses very, very fast; fast enough to reduce web page load times by a few hundredths of a percent under ideal circumstances.
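
To get a feel for the proportions, the rough sketch below times a DNS lookup and a full page fetch using nothing but the Python standard library. The hostname is a placeholder, the fetch repeats the lookup, and it retrieves only the base HTML document, so treat the numbers as ballpark only.

```python
import socket
import time
import urllib.request

HOST = "example.com"  # placeholder; substitute any site

# Time the DNS resolution by itself.
t0 = time.monotonic()
socket.getaddrinfo(HOST, 443)
dns_ms = (time.monotonic() - t0) * 1000

# Time a full fetch of the base document (this repeats the lookup).
t0 = time.monotonic()
with urllib.request.urlopen(f"https://{HOST}/", timeout=10) as resp:
    resp.read()
page_ms = (time.monotonic() - t0) * 1000

print(f"DNS lookup: {dns_ms:.0f} ms; page fetch: {page_ms:.0f} ms "
      f"({dns_ms / page_ms:.1%} of the total)")
```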

Will Cloudflare’s 1.1.1.1 DNS Hide Information from Your ISP?

No. As we’ve said, DNS queries are questions that return answers. Since the answers are necessary for all Internet communication, they can’t be hidden. Even though Cloudflare hides the questions you’re asking from the ISPs it can’t hide the answers.

So this is not a meaningful privacy enhancement. It appears to be the sort of product that comes down from the top of the company instead of up from the engineering ranks. Cloudflare’s CEO still feels bad about cutting off his Nazis, and wants to be one of the Internet’s good guys again.

So he’s told his engineers to build this service for image and reputation reasons. It makes sense for CDNs to offer DNS services, of course. But why must Silicon Valley insist on over-hyping every little thing it does? Engineers see things like this as minuscule performance enhancements and nothing more.

If Cloudflare really wanted to improve privacy it would offer a VPN.

*Note: Corrected; initially I said Quad9 was an IBM project.

The post Cloudflare’s 1.1.1.1 DNS Does Nothing for Privacy appeared first on High Tech Forum.

From the Core to the Edge: Perspective on Internet Prioritization


The House Communications and Technology Subcommittee held a great hearing on Internet optimization today. The hearing addressed the core issue in the net neutrality controversy: the ability of ISPs to optimize traffic streams for a fee.

This practice – which is more theoretical than real – has been made into fear fodder by the economic interests behind net neutrality as well as their non-profit enablers. Witness Paul Schroeder of Aira Technology presented his company’s app as a use case for managed wireless traffic.

This app provides visually impaired people with a guide who can describe their surroundings to them. Aira tried doing this with ordinary wireless and found that it didn’t work, but they now use a service from AT&T that brings it to life.

My Testimony

I submitted 40 pages of testimony to the committee, comprising a tutorial on Internet optimization and its uses. My first draft had a factual error – I said Tom Wheeler created the paid prioritization bugaboo in 2015 – and the committee was gracious enough to allow me to correct it.

Here’s a summary of my five minute opening remarks as prepared. I didn’t exactly stick to the script because I don’t know how to do that.

  1. Prioritization has been part of the Internet’s design from the beginning, but it hasn’t always been part of its practice. Type of Service, IntServ, and DiffServ are examples. It was controversial for a time, but those days are behind us.
  2. After 15 years of discussion, we’ve reached consensus on the fact that it’s legitimate for ISPs, CDNs, transit networks, and purpose-built services like Webex to accelerate time-sensitive Internet apps such as enterprise voice and video conferencing.
  3. We also appear to appreciate the power of competition to build cost-effective networks that optimize delay, throughput and reliability for both content-oriented apps and real-time apps.
  4. We realize, I believe, that as Internet routers have become more powerful, the reach of the Internet has grown, and the pool of Internet applications has expanded, prioritization and related Internet optimization techniques such as resource reservation, traffic shaping, and dynamic path selection have become not only commonplace but essential.
  5. This is good because no matter how much capacity networks have, there will always be opportunities to improve. Many are waiting with bated breath for 5G, which promises so much. But I expect 6G will be even more awesome. Technology always marches forward.
  6. I think we appreciate that prioritization mechanisms such as the IEEE 802.11e standard (which I helped design) are beneficial to real-time applications such as voice in the same way that LTE bearers are. The fact that one is provided “free” on closed enterprise networks while the other is sold to all interested parties is irrelevant to their utility.
  7. The remaining disagreements about prioritization and other Internet optimization techniques seem merely to be questions about price and regulatory consistency. I believe it goes without saying that all firms who provide a given service should generally be regulated in the same way.
  8. It’s hard to explain the continual increases in broadband speeds we’ve seen in the US over the last ten years – speed improves 35% per year – without giving some credit to the expectation of profit. The fact that web speeds have stagnated over this period – even declining in 2016 – suggests something is wrong with the web’s financial model.
  9. Leaving the consumer broadband market aside, Internet optimization is important to enterprises that have to connect branch offices to data centers located at corporate headquarters or in the cloud. These systems support telephone calls, video conferencing, software as a service, and access to corporate databases. Traditionally, branch offices have relied on private lines, but public Internet connections are much less costly.
  10. Instead of paying $300/month for a 1.5 Mbps T1 connection to HQ, branch offices can get 50 – 250 Mbps connections for less than $75, with the flexibility to access the Internet from them as well (see the cost-per-megabit sketch below). But these connections require management to ensure high-quality video conferencing and enterprise voice. The option to purchase a managed service that prioritizes access to headquarters over Internet access on the same wire hasn’t always been clear; it would be good for Congress to make it so.
  11. If firms cannot specify Service Level Agreements for data services that combine Internet use with private Business Data Services on the same wire, their networking costs will be artificially high and innovation will suffer. Business data is a more competitive market than consumer internet is in the pre-5G era; this market can probably police itself.
  12. The Internet is not simply a sandbox for network research any more, it has become the primary means of electronic communication around the world. Before long, it will be the only such means and we will all be better for it. Please allow firms that depend on networking to invest efficiently so as to maximize their incentives to innovate.

We all gain from advances in internet technology. Thank you, and I look forward to your questions.
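
As a footnote to point 10 above, here’s the cost-per-megabit arithmetic behind that comparison, assuming the low end of the 50 – 250 Mbps range; the dollar figures are the ones quoted in the testimony summary, not new data.

```python
# Figures from point 10 above: a 1.5 Mbps T1 at $300/month versus a
# 50 - 250 Mbps broadband connection at $75/month (low end used here).
t1_cost_per_mbps = 300 / 1.5        # $200.00 per Mbps
broadband_cost_per_mbps = 75 / 50   # $1.50 per Mbps at the low end

print(f"T1: ${t1_cost_per_mbps:.2f} per Mbps")
print(f"Managed broadband: ${broadband_cost_per_mbps:.2f} per Mbps")
print(f"Ratio: roughly {t1_cost_per_mbps / broadband_cost_per_mbps:.0f}x")
```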

The Video

You can see the hearing on YouTube. It’s a shame we didn’t do this years ago; please bear it in mind next time you hear someone describing net neutrality as an “all packets are equal” rule.

The post From the Core to the Edge: Perspective on Internet Prioritization appeared first on High Tech Forum.


Regulatory Balance Across Platforms


We usually focus on the tech side of tech policy here, but sometimes we have to discuss policy as well. In the midst of efforts by Facebook and other Internet platforms to escape responsibility for over-sharing personal information with political consultants, Congress is inclined to change platform regulatory status.

In response to the abuse of the Section 230 loophole for user-generated content by the sex-trafficking site Backpage, Congress has already passed – and the president has signed – the FOSTA law limiting platform immunity. Backpage collaborated with users to create unlawful ads and then claimed they were the users’ own work.

Now that Congress has shown a willingness to act, friends of the platform industry are running scared. Congress, they insist, is about to kill the goose that lays the golden eggs. If just a little accountability has this result, we have to wonder what might happen to Google and Facebook if they were regulated the way the communications platforms – ISPs – have been.

The Over-Regulated Internet

Historically, both content and communication platforms have been inappropriately regulated. Communication platforms – ISPs – struggle under regulations designed for the Bell System monopoly in 1913. While these regulations were updated in the 1934 Communications Act and the 1996 Telecommunications Act, they’re still only tangentially related to the Internet.

The infamous Title II of the Act that ruled the Internet from 2015 until the effective date of the 2017 Restoring Internet Freedom Order was intended to stimulate retail competition for local telephone service. This has nothing to do with the way the Internet functions, of course.

One of the anomalies of ISP regulation under Title II concerns the idea that they only provide “access to the Internet” rather than membership in it. This trope rationalizes treating them differently from the rest of the Internet. While Internet-based services are deregulated by Section 230 of the Communications Act, Title II puts Internet service in a different bucket.

Oddly, lawmakers who praise Title II claim it allowed the Internet to flourish in the dial-up era. But aren’t the ISPs that enjoyed access to dial-up lines simply accessing the Internet instead of joining it? So Title II has never had anything to do with the actual Internet, just with access to it.

The Under-Regulated Internet

All of the Internet apart from access is deregulated. This is because Section 230 relieves web sites and other Internet services of Title II telecommunications carrier obligations. Sites that take content, including comments, from users can censor as much or as little as they want.

All they have to do is take down unlawful content that’s brought to the attention of the site’s operator. Can you imagine a regulatory framework that granted such wide latitude to ISPs?

Section 230 is too permissive. The Backpage case showed that the minimal protection it provided to many injured parties – victims of IP theft, sex trafficking, and a host of other crimes – is easily circumvented. The site operator simply conspired with others to post unlawful content and played dumb when called on it.

Much to the dismay of Section 230 author Sen. Ron Wyden, the loophole was closed for one class of crimes by FOSTA. But there’s still plenty of unlicensed entertainment for sale on the Internet, along with other criminal conduct.

Ambiguous Infrastructure

The rationale for Section 230 was to help US companies do business over the Internet. It is often credited with helping the US to dominate markets for Internet search, advertising, social networks, video streaming, smart phones, retail, and cloud services. So Mission Accomplished. But it’s so permissive that the US also leads the world in IP theft and Internet attacks, so it’s sensible to say Internet-based services are under-regulated.

Internet service providers are over-regulated in the sense that any service they sell other than indiscriminate IP forwarding is suspect. ISPs are able to optimize traffic streams by application, but offering to do this for a fee was frowned upon by the Obama FCC. This capability might allow new applications to bloom, but the now-powerful Internet-based services don’t want this to happen. Hence, Internet service was over-regulated before RIFO.

Between these two categories of business (Internet Services and Internet-Based Services) there exists a large gray area of edge-based infrastructure. This includes Content Delivery Networks (Akamai), web accelerators (Cloudflare), cloud services (Amazon), and real-time networks (Webex). The operation of these alternative infrastructures – let’s call them Alternative Infrastructure – is interesting.

If we buy the Obama FCC’s description of ISPs as providing “access to the Internet”, I think we have to recognize that ISPs that have direct connections to Akamai, Cloudflare, Amazon, and Webex aren’t accessing the Internet when they connect users to these services. They’re actually bypassing the Internet in favor of a bespoke connection to a specialized service. And they’re deregulated.

The Worst of Both Worlds

So the Internet consists of services regulated under two radically different frameworks: ISPs are over-regulated and everything else (including Alternative Infrastructure) is under-regulated. This disparate treatment is a modern-day consequence of historical decisions made on two sets of facts that no longer exist.

The rationale for over-regulating communications networks comes from the day when there was only one networking company in the US (for practical purposes). But now there are dozens of major companies in this segment, and most Americans use several in the course of a day.

The under-regulation of Internet-based services echoes a day when there were thousands, if not millions of them. But this segment has become radically concentrated, to the point that it’s much more concentrated than ISPs.

And the Alternative Infrastructure didn’t even exist when the Section 230 vs. Title II distinction came about. Nobody knows what to do about it – not even its operators – so it’s a tabula rasa.

Here’s a Thought

Perhaps the best path to correction of our regulatory schizophrenia begins with the Alternative Infrastructure. It has some properties of ISPs and some of the Internet-Based Services.

A regulatory framework for Akamai and Cloudflare would therefore fit the other two types of Internet businesses fairly well. So how would you regulate the Internet alternatives?

Let’s begin by allowing them to create their own terms of service/use. They don’t all have to be “one size fits all” public accommodations, but it’s reasonable to insist that they apply their terms of service in a consistent manner, especially in noncompetitive markets.

If they want to be anti-social like Reddit, that’s fine as long as they don’t promote criminal conduct. And their pricing should be fairly consistent for parties that conform to their TOUs.

What do we do about access to Rights of Way, Easements, and Conduits? This is where the fun starts, but it’s not hard either. The onus is on the custodians of ROWs to make them available on public, consistent, and rational terms.

And what level of diligence do they have to employ on customer activities? I'd prefer they do a lot. If you have a site, a network, or an infrastructure that's used chiefly for criminal activities, you should not be in business. This condition needs a loophole for nations that ban activities we regard as lawful: each nation follows its own laws and has no obligation to cooperate with nations that have different ideas.

This notion needs work, but it’s a reasonable starting point. How would you  regulate Cloudflare?

The post Regulatory Balance Across Platforms appeared first on High Tech Forum.

Senator Markey Redesigns the Internet


A number of Democratic members of Congress have signed on to an amicus brief filed by Sen. Ed Markey in the Mozilla challenge to the FCC’s Restoring Internet Freedom Order. The brief focuses on two elements of broadband Internet service – DNS and caching – and gets both of them wrong.

This is disappointing because Markey is generally considered to be the Democrats’ leading expert on Internet regulation. Indeed, the brief touts Markey’s role in drafting the 1996 Telecom Act and suggests he’s an authority on the subject:

Indeed, amici have unique knowledge regarding an issue at the core of this case: whether broadband access to the Internet is properly classified as a “telecommunications service” or as an “information service,” as those terms are employed in the 1996 Act.

Let’s see if this knowledge is uniquely good or uniquely bad.

Telecommunication vs. Information Processing

The legal distinction between Title I Information Services and Title II Telecommunications Services is simple enough on the surface. The first processes information, while the second moves information between point A and point B.

There is a gray area around the use of information processing to transfer information, and this is where Markey stumbles. His error is accepting the reasoning provided in the 2015 Open Internet Order as reasonable and truthful.

To prove its case, the RIF Order simply needs to demonstrate that broadband Internet service contains a measure of information processing beyond what’s necessary to operate a network. To prove his case, Markey needs to show that all of the information processing ISPs do in the course of providing Internet service simply facilitates the transmission of information across the Internet.

The Transmission Argument

Telecom law defines telecommunications service as: “transmission, between or among points specified by the user, of information of the user’s choosing, without change in the form or content of the information as sent and received.” So three things must happen for this definition to hold true:

  1. The user must specify one or more pairs of end points;
  2. The user must select the information to be transferred;
  3. The transmission must not change the form or content of the information.

If any one of these conditions does not hold true, the transaction in question is not a telecommunications service. If, for example, I express a desire to receive a video stream from a particular location with a particular resolution and the service provider changes the resolution, this is not a telecommunication transaction because the form of the information has changed.
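
To make the logic of the test explicit, here is a toy restatement in code – a minimal sketch, with field names that are illustrative rather than legal terms of art. All three conditions must hold; if any one fails, the transaction falls outside the definition.

import dataclasses

@dataclasses.dataclass
class Transaction:
    endpoints_specified_by_user: bool   # condition 1
    information_chosen_by_user: bool    # condition 2
    form_and_content_unchanged: bool    # condition 3

def is_telecommunications_service(t: Transaction) -> bool:
    # The statutory definition only holds when all three conditions are met.
    return (t.endpoints_specified_by_user
            and t.information_chosen_by_user
            and t.form_and_content_unchanged)

# The video example above: the user picked the source and the stream, but the
# provider changed the resolution, so the form of the information changed.
print(is_telecommunications_service(Transaction(True, True, False)))  # False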

The Boundaries of Transmission

Similarly, if I specify a given end point from which to receive information and the service obtains the information I desire from a different end point, this is no longer telecommunication. And if I specify that I want Godfather III and the service gives me Godfather II, this is a better selection but it's not telecommunication.

The only exception would be in the second case: if the service switches end points for its own reasons, that could be judged a telecommunication  management decision. But it would need to involve considerations pertaining to the “management, control, or operation” of a genuine telecommunications service.

As a general matter, telecommunications on an IP network is nothing more than the faithful transmission of Internet Protocol datagrams from one IP address to another. The management loophole can therefore only apply to activities directly related to the "management, control, or operation" of a network's movement of IP datagrams from point of origin to point of consumption. The loosest part of the loophole is "operation" because that includes things like accounting and billing.

The Caching Argument

Caching is a service that adds value to IP networks by dynamically moving information closer to the point of consumption. It’s not possible to do this with all kinds of information because information doesn’t necessarily exist before transmission.

I can store copies of movies all over the Internet, but I can only have a conversation between end points that correspond to the locations of the parties. Companies such as Akamai have made a business of transmitting cached content all over the Internet.

This does not make them telecommunication providers because they’re selling storage and retrieval in addition to transmission. Firms like Akamai can also make decisions on behalf of users with respect to the form of the information they deliver; they can choose among video streams encoded at various resolutions and degrees of compression. And Akamai ultimately decides what IP address your information comes from.
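
For readers who want to see what "storage and retrieval in addition to transmission" looks like in practice, here is a minimal sketch of an edge cache in Python. The class name, TTL value, and URL are illustrative assumptions, not anything Akamai actually runs.

import time
from urllib.request import urlopen

class EdgeCache:
    """Serve named objects from local storage when possible; fetch otherwise."""
    def __init__(self, ttl_seconds=300.0):
        self.ttl = ttl_seconds
        self.store = {}  # url -> (fetched_at, body)

    def get(self, url):
        entry = self.store.get(url)
        if entry and time.monotonic() - entry[0] < self.ttl:
            return entry[1]                    # cache hit: no trip to the origin
        body = self.origin_fetch(url)          # cache miss: retrieve across the network
        self.store[url] = (time.monotonic(), body)
        return body

    @staticmethod
    def origin_fetch(url):
        with urlopen(url, timeout=5) as resp:
            return resp.read()

cache = EdgeCache()
first = cache.get("https://example.com/")   # fetched from the origin server
second = cache.get("https://example.com/")  # served from local storage
print(len(first), len(second))

The point of the sketch is that the second request never touches the origin: the cache operator is making a storage decision, not a transmission decision.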

Why ISPs Cache Information Within Their Networks

Some ISPs operate internal caches of commonly-requested information. Markey claims ISPs perform caching for one narrow purpose:

Internet caching when so employed by the broadband service provider is not a technology that anyone would consume other than adjunct to the use of Internet access service. [Caching] technology exists for the sole purpose of improving the performance of the telecommunications service offered by companies providing broadband Internet access.

This is false. ISPs can and do perform exactly the same kind of caching that commercial caches do. One example is their operation of Netflix appliances within their network footprints that work exactly the same way that Netflix caches do in other locations.

How Information Transactions Differ From Transmission

These appliances adapt stream resolutions and reduce telecommunication expenses for both Netflix and the ISP. Some ISPs actually bundle Netflix subscriptions with their services, which obviously changes the way consumers perceive the ISP service.

When such users request Netflix programs, they don’t care about the IP addresses that the programs come from or for the content’s specific form. They’re actually requesting information by name (rather than address) to be presented in a convenient format (rather than the original format) from wherever it can be found (rather than from a fixed address.)

This is not telecommunication per the statutory definition. Transactions of this sort can only fall inside the management exception if one has presupposed, as Markey appears to do, that any activity performed by an ISP is telecommunication by definition.

The DNS Argument

IP transmission consists of moving (or copying) an IP datagram from a source IP address to a destination IP address. In order to do that, multiple service providers need to route the datagram across one of several possible paths through each of their networks to a boundary point where the networks intersect. They do this according to information they have about where the destination IP address exists in space and how best to reach it.

This function is called routing, and it can be very resource intensive. Managing, controlling, and operating the routing function falls under the telecommunication management loophole. The processes of formatting, filling, ordering, and presenting IP datagrams to the ISP are user responsibilities that fall outside the transmission function.

DNS is one of the tools offered to users that enable them to fill IP datagrams with information. DNS provides a mapping of domain names to IP addresses, but it has no clue about where IP addresses actually reside. Hence, DNS is not a routing function. In reality, DNS is a general-purpose distributed database.
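
A short sketch makes the division of labor visible: DNS hands back candidate addresses, and only then does the transmission machinery (the operating system's stack and the ISP's routers) move datagrams toward one of them. This assumes a host with working DNS and a route to example.com.

import socket

# Step 1: a database lookup -- map a name to candidate addresses.
infos = socket.getaddrinfo("example.com", 80, proto=socket.IPPROTO_TCP)
addresses = {info[4][0] for info in infos}
print("DNS answers:", addresses)

# Step 2: transmission -- routing takes over at connect(); DNS plays no
# further part in how datagrams reach the chosen address.
with socket.create_connection(("example.com", 80), timeout=5) as conn:
    print("connected via", conn.getpeername())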

What DNS is Not

DNS is not a control function either, because it has no role in determining anything about the rate at which packets are put on the network and taken off. It also has no role in provisioning resources, load balancing, monitoring network health, or making datagrams move more efficiently.

DNS only provides the user with a binary number that must be placed in an IP packet for its successful transmission. Markey even admits this (although he's confused about the number's format):

Without DNS, a user would have to type a series of four numbers separated by periods into his or her browser to retrieve a website – an operation which, although entirely possible, would be inconvenient.

User convenience is nice, but it’s neither a transmission function nor a management function. It’s an added-value service bundled with IP transmission in order to make the ISP service offering more attractive to users, much like free bundled Netflix or free bundled anti-virus.

DNS is Offered by Third Parties

DNS is commonly but not exclusively provided by ISPs. Each ISP runs a DNS resolver, and access to it is configured automatically when the ISP's systems assign you an IP address.

But Google, IBM, and Cloudflare also provide DNS for free to anyone who wants it. You can easily configure the network stacks in your devices to use these resolvers: On a Mac, System Preferences->Network->Advanced->DNS. On Windows, Network and Sharing Center->Network Connections->Properties->IPv4->DNS Manual Config.

Or you can configure your router to use one of these services. IBM's resolver uses IPv4 address 9.9.9.9, Google's uses 8.8.8.8, and Cloudflare's is 1.1.1.1. Try one out and have a ball.
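
If you're curious what a resolver actually does, the sketch below sends a hand-built A-record query straight to Cloudflare's 1.1.1.1 over UDP, using only Python's standard library, and naively parses the reply (it assumes the resolver compresses answer names, which public resolvers do). Nothing about it depends on your ISP.

import random
import socket
import struct

def build_query(hostname):
    """Build a bare-bones DNS query for an A record."""
    txn_id = random.randint(0, 0xFFFF)
    # Flags 0x0100 = standard query with recursion desired; one question.
    header = struct.pack(">HHHHHH", txn_id, 0x0100, 1, 0, 0, 0)
    qname = b"".join(bytes([len(label)]) + label.encode("ascii")
                     for label in hostname.split(".")) + b"\x00"
    return header + qname + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN

def first_a_record(response):
    """Pull the first IPv4 address out of the response (naive parse)."""
    answer_count = struct.unpack(">H", response[6:8])[0]
    offset = 12
    while response[offset] != 0:        # skip the echoed question name
        offset += response[offset] + 1
    offset += 1 + 4                     # terminating zero plus QTYPE/QCLASS
    for _ in range(answer_count):
        offset += 2                     # answer name as a 2-byte compression pointer
        rtype, _rclass, _ttl, rdlength = struct.unpack(">HHIH", response[offset:offset + 10])
        offset += 10
        rdata = response[offset:offset + rdlength]
        offset += rdlength
        if rtype == 1 and rdlength == 4:  # A record: a 4-byte IPv4 address
            return ".".join(str(b) for b in rdata)
    raise ValueError("no A record in response")

resolver = ("1.1.1.1", 53)  # 8.8.8.8 or 9.9.9.9 work identically
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
    sock.settimeout(3)
    sock.sendto(build_query("example.com"), resolver)
    reply, _ = sock.recvfrom(512)
print(first_a_record(reply))

The database nature of the exchange is visible here: a question goes out, a binary address comes back, and routing never enters into it.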

DNS Has No Impact on ISP Networks

It won’t hurt your ISP if you use a third party DNS because DNS has absolutely nothing to do with the management, control, or operation of an ISP network. It’s an added value service provided for your convenience.

Companies like Google want to be your DNS provider because doing so enables them to gather more information about where you go on the Internet. And that’s why they offer it for free.

If DNS had anything to do with Internet routing – as the severely misinformed Justice Scalia opined in his dissent in Brand X and the (even more) misinformed FCC claimed in the 2015 Open Internet Order – you can be sure ISPs would be fighting third party DNS tooth and nail.

So Who’s Right?

As I said above: To prove its case, the RIF Order simply needs to demonstrate that broadband Internet service contains a measure of information processing beyond what’s necessary to operate a network. To prove his case, Markey needs to show that all of the information processing ISPs do in the course of providing Internet service simply facilitates the transmission of information across the Internet.

The RIF order mentions caching and DNS. As I’ve explained, these are services that go over and beyond simple transmission. Markey argues that caching  and DNS – services provided by both ISPs and third parties – are merely management and control functions in ISP networks.

That is not in fact the case: their functions are unrelated to network management and even further removed from network control. They're added-value functions that are indispensable parts of a bundle of services that includes, but is not limited to, transmission.

Why Are These False Claims Even Made?

It’s obvious that Sen. Markey and his colleagues aren’t just trying to win a lawsuit. If that were the goal, they would have put together a stronger argument in this amicus brief.

I suspect they’re simply making a political statement: they want to protect hapless consumers from ISP abuse, as they’ve been doing since the ’90s. But this argument is wearing thin because very few people associate the major abuses of consumer privacy, safety, or convenience with DNS, caching, or broadband service of any stripe.

In reality, the Markey amicus doesn’t describe the Internet that we use today. It addresses an entirely different system that exists only in his mind. ISP service is a combination of transmission and information processing that serves the needs of the information society. And it appears to be serving those needs pretty darned well.

The post Senator Markey Redesigns the Internet appeared first on High Tech Forum.

The Big Picture: Globalization 4.0


I went to Dubai this week for the annual meeting of the Global Future Councils, a project of the World Economic Forum that sets the agenda for the Davos meeting in January. The theme was “shaping a new global architecture” to ensure that the benefits of the fourth industrial revolution roll out to everyone.

These terms need some unpacking. “Global architecture” means institutions, laws, and norms that affect trade, development, and human rights; the fourth industrial revolution is broadband (especially 5G) networks, the Internet of Things, artificial intelligence, and robotics.

4IR, as it’s called, underpins the global economic system, hence Globalization 4.0 goes along with it. The current perils to global cooperation are the return of nationalism, income inequality, and environmental devastation. But this isn’t just some hippie whine-fest, it’s a serious attempt to maximize the good side of technology while reducing the bad side.

Tantalizing Intellectual Buffet

The trouble with gatherings of this sort is deciding which sessions to attend because so many great topics are on the table, from Geopolitics to Augmented Reality. Fortunately, invitees are assigned to specific councils that deal with narrower subjects.

Mine was information and entertainment, which includes personal data collection, content distribution, fake news, social networks, and intellectual property.  We distilled the subject matter down to the essentials and devised a few concrete projects that can be implemented in a year.

The projects with the best chance of success are management of personal data collection – digital privacy – and measures to reduce the impact of fake news. The former needs technical means to audit data collection, similar to credit reports, but with the additional capability of revising permissions after initial consent has been given. This is very doable.

Fake News is a Very Hard Problem

It’s tempting to say the solution to fake news is media literacy and education, but that falls short of the mark. A key factor is understanding why people are so easily seduced by fake news and how much work we’re willing to do in order to identify it.

Facebook is, of course, the world’s number one hub for the sharing of false information. There are large Facebook groups dedicated to just about every false and misguided belief system and conspiracy theory you can name: anti-vaccination, antisemitism and other forms of racism, food myths, medical placebos, economic isolationism, and dubious forms of technology regulation.

These groups thrive because they provide validation to people whose identities are bound up with false beliefs and alternative facts. People who hold these beliefs don’t like being told they’ve been fooled, but they’re not going to escape until they realize this. There are several ideas about how to do this, all of which overlap with the deprogramming that enables people to escape from abusive cults. Just milder and more persistent.

I suspect that nudging firms like Facebook and Google toward less destructive business models is essential, but that’s obviously not going to be easy and is certainly not a short-term project.

The Globalization 4.0 Theme

The councils recognize that Globalization 3.0 has been good for most people but not for all. Extreme poverty has been radically reduced worldwide over the last 20 years, life expectancies have increased, education is more pervasive, and health is improving.

But extremist movements are on the rise in developed countries, institutions are failing, and social polarization by income and ideology is increasing. Many of these problems are local to nations, but the harder ones are global.

I think it’s fair to say that there’s a consensus among this group that China has radically destabilized international institutions. We see this in trade, banking, and aid, but it’s also a factor in technology.

International Standards are Essential to Technology Development

Those of us who develop networking technologies are acutely aware of the roles played by standards bodies such as IEEE 802, the Internet Engineering Task Force, and the ITU in ensuring interoperability. We can access the Internet from wireless devices all over the world because these standards are uniform across the globe.

It's not even remarkable that I can engage in chats, open and close my garage door, or schedule a video recording from Dubai; it's just a fact. These capabilities depend on a long history of technologists making agreements about the way things are going to work.

This system of global agreements falls apart when we’re unable to reach consensus. The Internet we have is fairly fragile and limited because engineers were unable to reach agreement on a better one in the Open Systems Interconnection project of the 1980s. John Day tells the tale of the failure of OSI in his book, Patterns in Network Architecture.

China is Disrupting Networking Standards

Fractured standards aren’t good, and we’re seeing more of them. China devised its own security protocol for Wi-Fi LANs in 2003, WAPI. It was alleged to be compatible with real Wi-Fi, but it wasn’t. The specification was only shared with Chinese firms, and it was mandated by the Chinese government, so it effectively shut out US firms.

In 2011, China devised its own standard for the management of  MPLS optical networks, MPLS-TP OAM, through ITU’s Study Group 15. This standard violated an agreement between the ITU study group and IETF to develop a joint standard. The ITU group was effectively controlled by China and had no real reason to exist.

SG 15 has continued to lend a stamp of approval to other China-driven standards such as Slicing Packet Networks for 5G. This is despite the international consensus that 3GPP is where 5G standards are devised. These standards advantage Chinese firms such as ZTE.

A Pattern of Parallel Institutions

It’s not surprising that a nation that constructs institutions that depart from international norms in trade, banking, and development aid would also create its own technology standards. But it’s not good to bifurcate the complex international norms and agreements that support the global economy.

Hence, it’s important to heal this rift as soon as possible. Technology should be a field where expertise wins and geopolitical manipulation is relatively silent. When the next phase of economic development depends on emerging technologies, control of technology standards has pervasive, global impact.

It’s great to have a nation with China’s resources developing technology products that can be used all over the world. This keeps US firms such as Cisco and European firms like Ericsson on their toes. But at the end of the day, users of these products need to be allowed to choose on the basis of product quality rather than nation-of-origin leverage.

With any luck, this topic will be discussed in Davos. I raised it in a session on geopolitics led by Julie Bishop, the former Foreign Minister of Australia, and Paula Dobriansky, the Undersecretary of State for Democracy and Global Affairs in the Bush 43 administration. That's what I'm doing in the photo.

 

The post The Big Picture: Globalization 4.0 appeared first on High Tech Forum.

Sharing Federal Spectrum by Contract


News reports led me to believe the administration had unveiled a national spectrum plan on October 25. Alas, the White House statement is simply a plan to make a plan, not an actual strategy. But the outlines suggest we’ll see progress over the current approach to sharing spectrum.

There will be a new spectrum strategy because there has to be one if our network-based economy is going to continue growing. Not only does the US need to deploy 5G at a speedier pace than the rest of the world, we need to keep feeding the public’s appetite for mobile apps.

The hard thing about developing strategies that serve the needs of today’s markets and technologies is finding an approach that won’t put itself out of business in short order. We’re focused on 5G right now, but much of the nation will be on 4G for quite a while; and some day, we’ll have 6G.  We’re also not 100% sure how 5G is going to work. And we’re not even 50% sure about what apps will dominate 5G networks.

Revising the Current Plan is Crucial

Officially, the current spectrum strategy was developed by the Obama administration through such vehicles as the 2012 PCAST report on spectrum sharing. This report, effectively dictated by Google and Microsoft, emphasizes database-driven spectrum sharing systems along the lines of the (failed) TV White Spaces system.

The PCAST report was a disappointment because it failed to consider the only approach to increasing the utility and efficiency of spectrum-based systems that has ever worked, “upgrade and repack.” Google and Microsoft are both great companies with long histories of innovation, but creating novel ways to use spectrum isn’t really in their skill set.

The PCAST approach is essentially unworkable because it concentrates too much power in the keepers of the spectrum database (firms such as Google and Microsoft), requires too much coordination, and provides no capability to direct real-time behavior. It's a classic Rube Goldberg contraption.

The Future of Spectrum Systems

Rather than building databases of possible networks location-by-location with complex, hierarchical permission systems controlled by government, engineers would rather build systems that naturally interact well with each other according to specific, objective, technical terms. The issues that need to be resolved are mainly about real-time coordination, and there are several ways to address them.

One approach for solving this problem was used in the Internet's first radio-based system, the spread spectrum packet radio network deployed 42 years ago. Another system – Code-Division Multiple Access (CDMA) – applies sharing logic to the process of converting bits into radio signals. LTE devices all have some form of CDMA.

Other methods include Space-Division Multiple Access (SDMA), Multi-User MIMO, and beam-forming. In years to come, we're likely to see practical systems that use some form of angular momentum and even the phenomenon Einstein dubbed "spooky action at a distance," quantum entanglement.

The leap from these technical developments to bureaucratically-defined spectrum access controllers goes in the wrong direction; it also doesn’t solve any real problems.

Reducing the Government Footprint

Segregating government systems that use spectrum from private ones only makes sense when the systems serve radically different purposes, and not always then. Some military systems jam spectrum commonly used for navigation or other purposes, for example.

If your application is all about rendering other systems unusable, you’re obviously not into sharing all the time. But you probably are OK with allowing civilian use when you’re not actively fighting the enemy, which is most of the time.

All it takes to enable this is system design that shifts to alternate frequencies when any band becomes unusable for any reason. Using a spectrum access system to warn users that warfare is about to commence is essentially tipping off the enemy, probably not a wise move.

Sharing by Contract

For all uses that simply move data packets around, the most sensible forms of sharing are controlled by contracts: MVNO agreements, data roaming agreements, and FirstNet-style service plans that give premium access to safety-of-life applications and generic access to cat videos.

Contract terms that specify particular technologies can be changed as new technologies pass proof-of-concept. Laws and regulations specifying usage parameters serve a similar purpose, but are much harder to adapt to emerging technologies.

Every government agency that operates a spectrum-based system should be able to specify conditions under which it is willing to share with the private sector. Not all such terms are going to be reasonable at any given time, but each agency has a master; at least in theory if not always in practice.

Transition Plans

The US needs a spectrum plan that allows us to transition from a hodgepodge of federal, private sector, and unlicensed systems to a coherent mixture of networks based on common technologies. There’s no reason for an agency to operate its own LTE network when there are so many competent operators.

There’s also no reason for an agency to continue using a pre-LTE system that offers no meaningful advantage over LTE. [Note: Substitute 5G for LTE if you’re reading this after 2020.]  Hence, most agency spectrum use can probably be contracted out to the private sector.

Applications that can’t be supported by LTE and its progeny probably can be supported by a small number of alternative technologies that have commercial applications. So sharing by contract should be the default mode.

The kind of sharing that requires commercial fallback is a secondary mode, and there are other exceptions in which sharing by contract won’t work. The plan will need to specify conditions for these narrow cases.

Next time I’ll address the questions around broadband mapping and the technologies that aid it, LIDAR and HD Radar.

 

The post Sharing Federal Spectrum by Contract appeared first on High Tech Forum.

Larry Roberts was a Networking Legend


Internet old-timers were deeply saddened by the passing of networking pioneer Lawrence G. Roberts on December 26. Larry is the first High Tech Forum contributor to pass away: he wrote a piece on wireless billing plans for us in 2010.

Larry’s contributions to networking are unparalleled and under-appreciated. Not only did he design ARPANET, the proof-of-concept for packet switching, he enabled people all over the globe to connect to ISPs without paying long distance telephone charges through the Telenet packet switched data network.

ARPANET proved that packet switching was not only viable, but the only feasible way for people to use computers at a distance. Larry took this learning to heart by founding Telenet and providing a foundation for both public and private computer networks of all kinds. Larry not only enabled the Internet to be built, he created a technology that will outlast it.

Building ARPANET

ARPANET was built in the late ’60s to allow researchers to share expensive computers located in research labs around the country. It was created at a time when computers cost millions of dollars and telecom was also very pricey. Packet switching was the perfect solution for this problem because it was ideal for scenarios in which data volumes were low in relation to connection time.

While the telephone network is very limited in terms of data volume, it’s always active for the entire duration of a call. It takes a relatively long time to establish a telephone connection that enables a relatively small amount of data to be transferred. Computer networks need faster connections and the ability to transfer information in clumps.

Packet switching accomplishes this by making a high capacity pipe available to multiple users at the same time. As long as most users are not transferring data at the same time, it’s great; and that’s the common scenario.
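
The arithmetic behind that bargain is easy to sketch. Under assumed, illustrative numbers – 50 bursty users sharing a pipe that can carry 10 simultaneous senders, each user active 10% of the time – the chance that demand exceeds capacity stays small, and the snippet below computes it exactly.

from math import comb

users, p_active, pipe_capacity = 50, 0.10, 10  # illustrative parameters

def prob_overload(n, p, capacity):
    """Probability that more than `capacity` of n independent bursty users are active."""
    return sum(comb(n, k) * (p ** k) * ((1 - p) ** (n - k))
               for k in range(capacity + 1, n + 1))

print(f"P(overload) = {prob_overload(users, p_active, pipe_capacity):.4%}")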

While packet switching was initially invented by Paul Baran at RAND in the early ’60s, most of his work was classified so the idea had to be recreated by Donald Davies in the UK before Larry could learn about it.

The Legacy of ARPANET

ARPANET lasted 20 years, a good run for a technology advance. It had performance limitations reflecting the limited memory available on the computers of its era. When the typical computer only has dozens of kilobytes of memory, it’s easy to overrun it with incoming data. As computers grew more powerful, networking researchers were able to extend the ARPANET paradigm to make better use of new capabilities.

A lot of that early research was done by people who worked at one time for BBN, the firm Larry selected to build ARPANET. BBN was a strange choice in some ways because its primary field was acoustics. But when nobody has ever built a packet switched network, everyone is equally qualified.

Two BBN engineers – Alex McKenzie and Dave Walden – worked with Louis Pouzin‘s CYCLADES team in France to build the network that was the paradigm for TCP/IP, the Internet’s foundation protocols. CYCLADES engineer Gérard Le Lann was a member of Vint Cerf’s team at Stanford that created the TCP/IP design, which also owed a lot to INWG 96, primarily created by McKenzie.

The beauty of the Internet, of course, is its ability to make use of advances in the speed and reliability of telecommunications networks. Unlike common networks, the Internet is virtual rather than physical. Hence, it makes use of telecommunication networks of various types even though it’s nothing but software, specifications, and agreements in its own right.

The Importance of Telenet

As we explained in our Amicus Brief in the current challenge to the deregulation of Internet access, Larry founded Telenet in 1972, long before the Internet was designed. This history is explained in a paper by Larry and colleagues, The History of Telenet and the Commercialization of Packet Switching in the U.S.

Early online services ran on centralized computers accessed by users spread across the nation and even the world. Few could afford to pay long distance tolls based on connection time, especially when data volume was low. By cutting communication charges to a tenth of long distance rates, Telenet made online information services of all kinds viable.

Its first customer for this kind of service was The Source, the founders of which went on to create America Online. AOL joined Barry Shein‘s The World, another user of packet data networks, in opening the fledgling Internet up to ordinary people who weren’t working for universities, pursuing advanced degrees in computer science, or doing government-sponsored research.

While it’s common to assume that the dialup ISPs relied on the telephone network for user connectivity, the role of telecom was limited to connecting local calls and providing high-capacity leased lines for the Internet backbone. In reality, Telenet (and similar firms such as Tymnet) made these businesses work.

Roberts was Right on Internet Policy

Paul Baran’s initial packet switching design was a voice network, essentially a hardened, limited access telephone network. While some of the most interesting uses of the Internet involve voice, such as Tom Evslin’s ITXC and Jeff Pulver’s Vonage, the Internet doesn’t handle voice as well as it could.

This always frustrated Larry because he wanted packet switching to handle all of the world’s communication. Many people still maintain hard line phones even today because they feel they’re more reliable than the Internet. And in some ways they’re right.

Today’s Internet billing plans hide a multitude of complications. While consumers want flat rate pricing, the costs of providing Internet service depend, to a large extent, on usage. Harmonizing diverse uses on a common network is an ongoing research topic, and it probably always will be.

Creating a network that can be all things to all people was a monumental undertaking. Making it work for every user in the most reliable, safe, and economical way is even harder. Last October I happily shared with Larry the Amicus Brief that was influenced so heavily by his work on Telenet, and I was glad that it pleased him. His last message to me was a thumbs-up emoji.

There’s never going to be another Larry Roberts. Enjoy this video of a talk he gave at the Computer History Museum courtesy of ISOC.

 

The post Larry Roberts was a Networking Legend appeared first on High Tech Forum.
