Categories
Presentations

keynote “Sharing Network Data: Bright Gray Days Ahead” given at Passive and Active Measurement Conference

I’m honored to have been invited to give the keynote talk “Sharing Network Data: Bright Gray Days Ahead” at the Passive and Active Measurement Conference 2014 in Marina del Rey.

A copy of the talk slides is available at http://www.isi.edu/~johnh/PAPERS/Heidemann14b (pdf).

Some alternatives, perhaps brighter than the gray of standard anonymization.

From the talk’s abstract:

Sharing data is what we expect as a community. From the IMC best paper award requiring a public dataset to NSF data management plans, we know that data is crucial to reproducible science. Yet privacy concerns today make data acquisition difficult and sharing harder still. AOL and Netflix have released anonymized datasets that leaked customer information, at least for a few customers and with some effort. With the EU suggesting that IP addresses are personally identifiable information, are we doomed to IP-address free “Internet” datasets?
In this talk I will explore the issues in data sharing, suggesting that we need to move beyond black and white definitions of private and public datasets, to embrace the gray shades of data sharing in our future. Gray need not be gloomy. I will discuss some new ideas in sharing that suggest that, if we move beyond “anonymous ftp” as our definition, the future may be gray but bright.
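For readers unfamiliar with what “standard anonymization” involves, here is a minimal sketch (mine, not from the talk) of keyed IP-address pseudonymization in Python. Production tools such as Crypto-PAn additionally preserve prefix structure; this toy HMAC mapping does not, and the key is a placeholder.

```python
# Toy keyed IP pseudonymization (illustrative only; not from the talk).
# Unlike real tools such as Crypto-PAn, this mapping is not
# prefix-preserving: nearby addresses map to unrelated pseudonyms.
import hashlib
import hmac
import ipaddress

KEY = b"site-secret"  # hypothetical key; must be kept private and stable

def pseudonymize(addr: str) -> str:
    """Map an IPv4 address to a stable pseudonym address under KEY."""
    digest = hmac.new(KEY, addr.encode(), hashlib.sha256).digest()
    # Reinterpret the first 4 MAC bytes as a synthetic IPv4 address.
    return str(ipaddress.IPv4Address(int.from_bytes(digest[:4], "big")))

if __name__ == "__main__":
    for a in ("192.0.2.1", "192.0.2.2"):
        print(a, "->", pseudonymize(a))
```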

This talk did not generate new datasets, but it grows out of our experiences distributing data through several research projects (such as LANDER and LACREND, both part of the DHS PREDICT program) mentioned in the talk, with data available at http://www.isi.edu/ant/traces/. This talk represents my own opinions, not those of these projects or their sponsors.

Categories
Students

congratulations to Lin Quan for his new PhD

I would like to congratulate Dr. Lin Quan for defending his PhD in Dec. 2013 and filing his doctoral dissertation “Learning about the Internet through Efficient Sampling and Aggregation” in Jan. 2014.

Lin Quan (left) and John Heidemann, after Lin’s PhD defense.

From the abstract:

The Internet is important for nearly all aspects of our society, affecting ordinary people, businesses, and social activities. Because of its importance and widespread applications, we want to have good knowledge of the Internet’s operation, reliability, and performance, through various kinds of measurements. However, despite this wide usage, we have only limited knowledge of its overall performance and reliability. The first reason for this limited knowledge is that there is no central governance of the Internet, making both active and passive measurements hard. The second reason is the huge scale of the Internet, which makes brute-force analysis impractical given limits on computing resources such as CPU, memory, and probe rate.

This thesis states that sampling and aggregation are necessary to overcome resource constraints in time and space and so gain better knowledge of the Internet. Many other Internet measurement studies also use sampling and aggregation techniques to discover properties of the Internet. We distinguish our work by exploring novel mechanisms and new knowledge in several specific areas. First, we aggregate short-time-scale observations and use an efficient multi-time-scale query scheme to discover the properties and causes of long-lived Internet flows. Second, we sample and probe /24 blocks in the IPv4 address space, and use greedy clustering algorithms to efficiently characterize Internet outages. Third, we show an efficient and effective aggregation technique combining visualization and clustering, which makes both manual inspection and automated characterization easier. Last, we develop an adaptive probing system to study global-scale Internet reliability. It samples addresses and adapts the probe rate within each /24 block to maintain accurate beliefs. By aggregating and correlating with other domains, we are also able to study broader effects on Internet use, such as political causes, economic conditions, and access technologies.

This thesis provides several examples of Internet knowledge discovery through new sampling and aggregation mechanisms. We believe these mechanisms can be used by, and will inspire, future Internet measurement systems as they overcome resource constraints such as large amounts of dispersed data.
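To make the adaptive-probing idea concrete, here is a highly simplified sketch (not the dissertation’s code) of Bayesian belief maintenance for one /24 block: each probe outcome updates the belief that the block is up, and probing stops once the belief is confident, conserving probe budget. The response-rate and error parameters are illustrative assumptions.

```python
# Highly simplified sketch of per-/24 adaptive Bayesian probing (the
# dissertation's actual system is more sophisticated). The response-rate
# and error parameters below are illustrative assumptions.

def update_belief(p_up, responded, a=0.3, eps=0.01):
    """Bayes-update P(block is up) after one probe.

    a:   expected fraction of the block's addresses answering when up
    eps: chance of seeing a response when the block is actually down
    """
    if responded:
        num, den = a * p_up, a * p_up + eps * (1.0 - p_up)
    else:
        num = (1.0 - a) * p_up
        den = (1.0 - a) * p_up + (1.0 - eps) * (1.0 - p_up)
    return num / den

def probe_block(send_probe, p_up=0.5, conf=0.95, max_probes=15):
    """Probe addresses in one /24 until the up/down belief is confident.

    send_probe() stands in for pinging a random address in the block.
    Adaptivity: confident beliefs stop early, saving probe budget.
    """
    for _ in range(max_probes):
        p_up = update_belief(p_up, send_probe())
        if p_up > conf:
            return True          # confidently up
        if p_up < 1.0 - conf:
            return False         # confidently down
    return p_up >= 0.5           # fall back to the majority belief

if __name__ == "__main__":
    import random
    def simulated_probe():       # a block that answers ~30% of probes
        return random.random() < 0.3
    print(probe_block(simulated_probe))
```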


Categories
Social

ANT research group lunch

In early December we had an ANT research group lunch to celebrate recent PhD defenses (Xue Cai and Lin Quan) and graduates (Chengjie Zhang). As a special treat, alumnae Alefiya Hussain and Genevieve Bartlett joined us. A yummy lunch and a great occasion!

ANT Project members, Dec. 2013
Categories
Students

congratulations to Xue Cai for her new PhD

I would like to congratulate Dr. Xue Cai for defending her PhD and filing her doctoral dissertation “Global Analysis and Modeling on Decentralized Internet” in Dec. 2013.

Xue Cai (left) and John Heidemann, after her PhD defense.

From the abstract:

Better understanding of the Internet infrastructure is crucial to improving the reliability, performance, and security of web services. The need for this understanding drives research in network measurement. Internet measurement explores a variety of data related to a specific topic and then develops approaches to transform that data into useful understanding of the topic. This process is not straightforward, since available data often contains only indirect information that may appear to have limited connection to the topic.
This body of work asserts that systematic approaches can overcome data limitations to improve understanding about important aspects of the Internet infrastructure. We demonstrate the validity of our thesis statement by providing three specific examples that develop novel approaches and provide novel understanding compared to prior work. In particular, we employ four systematic approaches—statistical analysis, clustering, modeling, and what-if analysis—to understand three important aspects of the Internet: the efficiency and management of IPv4 addresses, the ownership of Autonomous Systems (ASes), and the robustness of web services when facing critical facility disruption. These approaches have addressed a variety of challenges posed by indirect, incomplete, over-fit, noisy, and unknown data; they in turn enable us to improve our understanding of the Internet.
Each of our three studies explores a different area of the problem space and opens a much larger area of opportunity. The data limitations addressed by our approaches also occur in many other problems. We believe our approaches can inspire future work to solve these problems and in turn provide more useful understanding about the Internet.

Categories
Publications Technical Report

new technical report “A Holistic Framework for Bridging Regional Threats to User QoE”

We just released a new technical report “A Holistic Framework for Bridging Regional Threats to User QoE”, ISI-TR-2013-687, available as https://www.isi.edu/~johnh/PAPERS/Cai13c.pdf

Estimated impact on user QoE in four cable cut incidents (Figure 13 from [Cai13c])

From the abstract:

Submarine cable cuts have become increasingly common, with five incidents breaking more than ten cables in the last three years. Today, around 300 cables carry the majority of international Internet traffic, so a single cable cut can affect millions of users, and repairs to any cut are expensive and time-consuming. Prior work has either measured the impact following incidents, or predicted the results of network changes to relatively abstract Internet topological models. In this paper, we develop a new approach to model cable cuts. Our approach differs by following problems drawn from real-world occurrences all the way to their impact on end-users. Because our approach spans many layers, no single organization can provide all the data needed to apply the model. We therefore perform what-if analysis to study a range of possibilities. With this approach we evaluate four incidents in 2012 and 2013; our analysis suggests general rules that assess the degree of a country’s vulnerability to a cut.
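As a concrete illustration of what-if analysis at one layer, the sketch below removes one (made-up) cable at a time and reports what fraction of each country’s international capacity survives; the paper’s model is far richer, following effects all the way to user QoE. All cables, capacities, and countries here are invented for illustration.

```python
# Toy what-if analysis (illustrative cables and capacities, not data from
# the paper): cut one cable at a time and report the fraction of each
# country's international capacity that survives.

CABLES = {  # cable name -> (capacity in Gb/s, countries where it lands)
    "cable-A": (1000, {"X", "Y"}),
    "cable-B": (400, {"X"}),
    "cable-C": (200, {"Y"}),
}

def surviving_capacity(cut):
    """Fraction of each country's capacity remaining after one cut."""
    total, left = {}, {}
    for name, (cap, countries) in CABLES.items():
        for c in countries:
            total[c] = total.get(c, 0) + cap
            if name != cut:
                left[c] = left.get(c, 0) + cap
    return {c: left.get(c, 0) / total[c] for c in total}

for cable in CABLES:
    print(f"cut {cable}: {surviving_capacity(cable)}")
```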


Categories
Papers Publications

new conference paper “Mapping the Expansion of Google’s Serving Infrastructure” in IMC 2013 and WSJ Blog

The paper “Mapping the Expansion of Google’s Serving Infrastructure” (by Matt Calder, Xun Fan, Zi Hu, Ethan Katz-Bassett, John Heidemann, and Ramesh Govindan) will appear at the 2013 ACM Internet Measurement Conference (IMC) in Barcelona, Spain in Oct. 2013.

This work was also featured today in Digits, the technology news and analysis blog from the Wall Street Journal, and at USC’s press room.

A copy of the paper is available at http://www.isi.edu/~johnh/PAPERS/Calder13a, and data from the work is available at http://mappinggoogle.cs.usc.edu, from http://www.isi.edu/ant/traces/mapping_google/index.html, and from http://www.predict.org.

Growth of Google’s infrastructure, measured in IP addresses [Calder13a] figure 5a

From the paper’s abstract:

Modern content-distribution networks both provide bulk content and act as “serving infrastructure” for web services in order to reduce user-perceived latency. Serving infrastructures such as Google’s are now critical to the online economy, making it imperative to understand their size, geographic distribution, and growth strategies. To this end, we develop techniques that enumerate IP addresses of servers in these infrastructures, find their geographic location, and identify the association between clients and clusters of servers. While general techniques for server enumeration and geolocation can exhibit large error, our techniques exploit the design and mechanisms of serving infrastructure to improve accuracy. We use the EDNS-client-subnet DNS extension to measure which clients a service maps to which of its serving sites. We devise a novel technique that uses this mapping to geolocate servers by combining noisy information about client locations with speed-of-light constraints. We demonstrate that this technique substantially improves geolocation accuracy relative to existing approaches. We also cluster server IP addresses into physical sites by measuring RTTs and adapting the cluster thresholds dynamically. Google’s serving infrastructure has grown dramatically in the ten months of our study, and we use our methods to chart its growth and understand its content serving strategy. We find that the number of Google serving sites has increased more than sevenfold, and most of the growth has occurred by placing servers in large and small ISPs across the world, not by expanding Google’s backbone.
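The EDNS-client-subnet measurement is straightforward to reproduce in outline. Below is a minimal sketch (not the authors’ code) using the dnspython library: it attaches an ECS option claiming a hypothetical client prefix, asks a public resolver which serving addresses that prefix maps to, and bounds the client-server distance with the speed of light in fiber. The resolver, query name, and prefix are placeholders.

```python
# Minimal sketch (not the authors' code) of the ECS mapping measurement,
# using the dnspython library. Resolver, name, and prefix are placeholders.
import dns.edns
import dns.message
import dns.query
import dns.rdatatype

def ecs_lookup(name, client_prefix, resolver="8.8.8.8"):
    """Ask a resolver which serving IPs it returns for a client prefix."""
    ecs = dns.edns.ECSOption(client_prefix, srclen=24)
    query = dns.message.make_query(name, dns.rdatatype.A,
                                   use_edns=0, options=[ecs])
    response = dns.query.udp(query, resolver, timeout=3.0)
    return [rr.address for rrset in response.answer
            if rrset.rdtype == dns.rdatatype.A for rr in rrset]

def max_distance_km(rtt_ms, fiber_factor=2.0 / 3.0):
    """Speed-of-light bound on client-server distance, given an RTT.

    Light in fiber travels at roughly 2/3 of c, so half the RTT bounds
    how far away the responding server can possibly be."""
    c_km_per_ms = 299.792  # speed of light in vacuum, km per millisecond
    return (rtt_ms / 2.0) * c_km_per_ms * fiber_factor

if __name__ == "__main__":
    print(ecs_lookup("www.google.com", "198.51.100.0"))  # hypothetical prefix
    print(f"20 ms RTT bounds distance to ~{max_distance_km(20.0):.0f} km")
```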

Categories
Papers Publications

new conference paper “Replay of Malicious Traffic in Network Testbeds” in IEEE Conf. on Technologies for Homeland Security (HST)

The paper “Replay of Malicious Traffic in Network Testbeds” (by Alefiya Hussain, Yuri Pradkin, and John Heidemann) will appear in the 13th IEEE Conference on Technologies for Homeland Security (HST) in Waltham, Mass. in Nov. 2013. The paper is available at http://www.isi.edu/~johnh/PAPERS/Hussain13a.

From the paper’s abstract:

In this paper we present tools and methods to integrate attack measurements from the Internet with controlled experimentation on a network testbed. We show that this approach provides greater fidelity than synthetic models. We compare the statistical properties of real-world attacks with synthetically generated constant bit rate attacks on the testbed. Our results indicate that trace replay provides fine time-scale details that may be absent in constant bit rate attacks. Additionally, we demonstrate the effectiveness of our approach to study new and emerging attacks. We replay an Internet attack captured by the LANDER system on the DETERLab testbed within two hours.
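The fine-time-scale contrast the abstract mentions can be quantified very simply, for example with the coefficient of variation (CV) of packet inter-arrival times: near zero for constant-bit-rate traffic, much larger for bursty real attacks. The sketch below is my illustration, not the paper’s method or tooling; real timestamps would come from a packet trace.

```python
# Illustrative comparison (not the paper's tooling): the coefficient of
# variation of inter-arrival times separates smooth constant-bit-rate
# traffic (CV ~ 0) from bursty attack traffic (CV >> 0).
import statistics

def interarrival_cv(timestamps):
    """CV of packet inter-arrival times from sorted arrival timestamps."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return statistics.pstdev(gaps) / statistics.fmean(gaps)

# An ideal 1000 packet/s CBR stream: inter-arrivals are uniform, CV ~ 0.
cbr = [i * 0.001 for i in range(1000)]
print(f"CBR CV: {interarrival_cv(cbr):.3f}")

# A bursty stream: nine closely spaced packets, then a pause, repeated.
bursty, t = [], 0.0
for i in range(1000):
    t += 0.0002 if i % 10 else 0.009
    bursty.append(t)
print(f"bursty CV: {interarrival_cv(bursty):.3f}")
```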

Data from the paper is available as DoS_DNS_amplification-20130617 from the authors or at http://www.predict.org, and the tools are available on DETERLab.

Categories
Publications Technical Report

new technical report “Mapping the Expansion of Google’s Serving Infrastructure”

We just released a new technical report “Mapping the Expansion of Google’s Serving Infrastructure”, available as https://www.isi.edu/~johnh/PAPERS/Calder13a.pdf

Growth of Google’s serving network (measured here in IP addresses).

From the abstract:

Modern content-distribution networks both provide bulk content and act as “serving infrastructure” for web services in order to reduce user-perceived latency. These serving infrastructures (such as Google’s) are now critical to the online economy, making it imperative to understand their size, geographic distribution, and growth strategies. To this end, we develop techniques that enumerate servers in these infrastructures, find their geographic location, and identify the association between clients and servers. While general techniques for server enumeration and geolocation can exhibit large error, our techniques exploit the design and mechanisms of serving infrastructure to improve accuracy. We use the EDNS-client-subnet extension to DNS to measure which clients a service maps to which of its servers. We devise a novel technique that uses this mapping to geolocate servers by combining noisy information about client locations with speed-of-light constraints. We demonstrate that this technique substantially improves geolocation accuracy relative to existing approaches. We also cluster servers into physical sites by measuring RTTs and adapting the cluster thresholds dynamically. Google’s serving infrastructure has grown dramatically in the last six months, and we use our methods to chart its growth and understand its content serving strategy. We find that Google has almost doubled in size, and that most of the growth has occurred by placing servers in large and small ISPs across the world, not by expanding Google’s backbone.
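To illustrate the RTT-based clustering step, here is a toy sketch (not the authors’ code): servers at one physical site see near-identical RTTs from the same vantage points, so we can greedily group IPs whose RTT vectors are close. The fixed threshold and the RTT data are illustrative assumptions; the paper adapts cluster thresholds dynamically.

```python
# Toy sketch of clustering server IPs into physical sites by RTT
# similarity (not the authors' code). The threshold is fixed here for
# simplicity; the paper adapts cluster thresholds dynamically.
import math

def cluster_sites(rtts, threshold_ms=2.0):
    """Greedily join each server to the first site within threshold.

    rtts maps server IP -> vector of RTTs (ms) from several vantage
    points; co-located servers have near-identical vectors."""
    sites = []  # list of (representative RTT vector, [member IPs])
    for ip, vec in rtts.items():
        for rep, members in sites:
            if math.dist(rep, vec) <= threshold_ms:
                members.append(ip)
                break
        else:
            sites.append((vec, [ip]))
    return [members for _, members in sites]

# Hypothetical measurements from two vantage points:
rtts = {"198.51.100.1": [10.1, 40.2],   # these two cluster into one site
        "198.51.100.2": [10.3, 40.0],
        "203.0.113.9": [80.5, 12.1]}    # a distinct, distant site
print(cluster_sites(rtts))
```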

Datasets from this work will be made available; in the meantime, please contact the authors if you’re interested.

Categories
Software releases

Software to Generate IP Hitlists with Hadoop Now Available

We are happy to release the set of map/reduce processing scripts that run in Hadoop to consume our Internet address censuses and output hitlists, as described in the paper “Selecting Representative IP Addresses for Internet Topology Studies”.

These scripts depend on our internal Hadoop configuration and so will require some modification to work elsewhere, but we make them available and encourage feedback about their use.
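For a flavor of what such a map/reduce pass looks like, here is a minimal Hadoop-streaming-style sketch (not the released scripts) that picks one representative address per /24, here simply the most responsive one. The input format, one “ip score” pair per line, and the file name are simplifying assumptions.

```python
#!/usr/bin/env python
# Minimal Hadoop-streaming-style sketch of hitlist generation (not the
# released scripts): pick one representative address per /24, here simply
# the most responsive one. Assumes simplified census lines of "ip score".
import sys

def mapper():
    """Emit "/24-prefix <TAB> ip <TAB> score" for each census record."""
    for line in sys.stdin:
        ip, score = line.split()
        prefix = ".".join(ip.split(".")[:3])
        print(f"{prefix}\t{ip}\t{score}")

def reducer():
    """Keep the highest-scoring address per prefix (input sorted by key)."""
    best = None  # (prefix, ip, score) of the current best candidate
    for line in sys.stdin:
        prefix, ip, score = line.rstrip("\n").split("\t")
        score = float(score)
        if best is None or best[0] != prefix:
            if best is not None:
                print(f"{best[0]}.0/24\t{best[1]}")
            best = (prefix, ip, score)
        elif score > best[2]:
            best = (prefix, ip, score)
    if best is not None:
        print(f"{best[0]}.0/24\t{best[1]}")

if __name__ == "__main__":
    mapper() if sys.argv[1:] == ["map"] else reducer()
```

Under Hadoop streaming this would run with the same file (hypothetically named hitlist.py) as both -mapper "hitlist.py map" and -reducer "hitlist.py", with the framework sorting map output by prefix between the two phases.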

Categories
Presentations

new talk “Long-term Data Collection and Analysis of Outages at the Edge” given at the AIMS workshop

John Heidemann gave the talk “Long-term Data Collection and Analysis of Outages at the Edge” at UCSD, San Diego, California on Feb. 8, 2013 as part of the CAIDA Active Internet Measurement Systems (AIMS) Workshop.  Slides are available at http://www.isi.edu/~johnh/PAPERS/Heidemann13e.html.


This talk describes our analysis of outages in edge networks at the time of Hurricane Sandy, and how that work was enabled by long-term data collection. The analysis showed that U.S. networks had double the outage rate (from 0.2% to 0.4%) on 2012-10-30, the day after Sandy’s landfall, and recovered after four days. We highlighted long-term data collection of Internet surveys, a random sample of about 41,000 /24 blocks, and the characteristics that make that data suitable for re-analysis.
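The outage rates above reduce to simple arithmetic over the survey: the fraction of observed /24 blocks down on a given day. A back-of-the-envelope sketch, with made-up block identifiers sized to echo the 0.2% to 0.4% result:

```python
# Back-of-the-envelope sketch of the outage-rate computation (illustrative
# stand-in data, not the survey's real observations).

SURVEY_BLOCKS = 41000  # approximate number of /24 blocks in the survey

def daily_outage_rate(down_blocks):
    """Fraction of surveyed /24 blocks observed down on one day."""
    return len(down_blocks) / SURVEY_BLOCKS

# Hypothetical block IDs, sized to echo the Sandy result (0.2% -> 0.4%).
obs = {
    "2012-10-29": {f"blk{i}" for i in range(82)},   # baseline day
    "2012-10-30": {f"blk{i}" for i in range(164)},  # day after landfall
}
for day in sorted(obs):
    print(day, f"{daily_outage_rate(obs[day]):.2%}")
```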

This work is based on our recent technical report “A Preliminary Analysis of Network Outages During Hurricane Sandy”, joint work of John Heidemann, Lin Quan, and Yuri Pradkin.