Categories
Presentations

keynote “Sharing Network Data: Bright Gray Days Ahead” given at Passive and Active Measurement Conference

I’m honored to have been invited to give the keynote talk “Sharing Network Data: Bright Gray Days Ahead” at the Passive and Active Measurement Conference 2014 in Marina del Rey.

A copy of the talk slides is at http://www.isi.edu/~johnh/PAPERS/Heidemann14b (pdf).

Some alternatives, perhaps brighter than the gray of standard anonymization.

From the talk’s abstract:

Sharing data is what we expect as a community. From the IMC best paper award requiring a public dataset to NSF data management plans, we know that data is crucial to reproducible science. Yet privacy concerns today make data acquisition difficult and sharing harder still. AOL and Netflix have released anonymized datasets that leaked customer information, at least for a few customers and with some effort. With the EU suggesting that IP addresses are personally identifiable information, are we doomed to IP-address free “Internet” datasets?
In this talk I will explore the issues in data sharing, suggesting that we need to move beyond black and white definitions of private and public datasets, to embrace the gray shades of data sharing in our future. Gray need not be gloomy. I will discuss some new ideas in sharing that suggest that, if we move beyond “anonymous ftp” as our definition, the future may be gray but bright.

This talk did not generate new datasets, but it grows out of our experience distributing data through several research projects (such as LANDER and LACREND, both part of the DHS PREDICT program) mentioned in the talk, with data available at http://www.isi.edu/ant/traces/. This talk represents my own opinions, not those of these projects or their sponsors.

Categories
Papers Publications

new conference paper “Mapping the Expansion of Google’s Serving Infrastructure” in IMC 2013 and WSJ Blog

The paper “Mapping the Expansion of Google’s Serving Infrastructure” (by Matt Calder, Xun Fan, Zi Hu, Ethan Katz-Bassett, John Heidemann and Ramesh Govindan) will appear in the 2013 ACM Internet Measurements Conference (IMC) in Barcelona, Spain in Oct. 2013.

This work was also featured today in Digits, the technology news and analysis blog from the Wall Street Journal, and at USC’s press room.

A copy of the paper is available at http://www.isi.edu/~johnh/PAPERS/Calder13a, and data from the work is available at http://mappinggoogle.cs.usc.edu, from http://www.isi.edu/ant/traces/mapping_google/index.html, and from http://www.predict.org.

Growth of Google’s infrastructure, measured in IP addresses [Calder13a] figure 5a

From the paper’s abstract:

Modern content-distribution networks both provide bulk content and act as “serving infrastructure” for web services in order to reduce user-perceived latency. Serving infrastructures such as Google’s are now critical to the online economy, making it imperative to understand their size, geographic distribution, and growth strategies. To this end, we develop techniques that enumerate IP addresses of servers in these infrastructures, find their geographic location, and identify the association between clients and clusters of servers. While general techniques for server enumeration and geolocation can exhibit large error, our techniques exploit the design and mechanisms of serving infrastructure to improve accuracy. We use the EDNS-client-subnet DNS extension to measure which clients a service maps to which of its serving sites. We devise a novel technique that uses this mapping to geolocate servers by combining noisy information about client locations with speed-of-light constraints. We demonstrate that this technique substantially improves geolocation accuracy relative to existing approaches. We also cluster server IP addresses into physical sites by measuring RTTs and adapting the cluster thresholds dynamically. Google’s serving infrastructure has grown dramatically in the ten months of our study, and we use our methods to chart its growth and understand its content serving strategy. We find that the number of Google serving sites has increased more than sevenfold, and most of the growth has occurred by placing servers in large and small ISPs across the world, not by expanding Google’s backbone.
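The speed-of-light constraint at the heart of the geolocation technique can be illustrated with a short sketch (hypothetical Python, not the paper’s implementation): each client’s RTT bounds how far the server can be from that client, and only candidate locations inside every client’s radius remain feasible. The propagation speed and all coordinates below are illustrative assumptions.

```python
import math

# Illustrative propagation speed: ~2/3 of c in fiber, roughly 100 km per ms
# of one-way delay.  A real system would calibrate this bound carefully.
C_FIBER_KM_PER_MS = 100.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in km."""
    rlat1, rlon1, rlat2, rlon2 = map(math.radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = rlat2 - rlat1, rlon2 - rlon1
    a = math.sin(dlat / 2) ** 2 + \
        math.cos(rlat1) * math.cos(rlat2) * math.sin(dlon / 2) ** 2
    return 6371.0 * 2 * math.asin(math.sqrt(a))

def feasible(candidate, observations):
    """True if candidate (lat, lon) satisfies every client's RTT constraint:
    the server can be at most (rtt/2) * speed away from each client."""
    lat, lon = candidate
    return all(
        haversine_km(lat, lon, clat, clon) <= (rtt_ms / 2) * C_FIBER_KM_PER_MS
        for (clat, clon, rtt_ms) in observations)

def geolocate(candidates, observations):
    """Keep only candidate locations consistent with all observations."""
    return [c for c in candidates if feasible(c, observations)]
```

Intersecting many such noisy constraints is what lets the paper narrow server locations far better than any single measurement could.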

Categories
Papers Publications

new conference paper “Replay of Malicious Traffic in Network Testbeds” in IEEE Conf. on Technologies for Homeland Security (HST)

The paper “Replay of Malicious Traffic in Network Testbeds” (by Alefiya Hussain, Yuri Pradkin, and John Heidemann) will appear in the 13th IEEE Conference on Technologies for Homeland Security (HST) in Waltham, Mass. in Nov. 2013. The paper is available at http://www.isi.edu/~johnh/PAPERS/Hussain13a.

From the paper’s abstract:

In this paper we present tools and methods to integrate attack measurements from the Internet with controlled experimentation on a network testbed. We show that this approach provides greater fidelity than synthetic models. We compare the statistical properties of real-world attacks with synthetically generated constant bit rate attacks on the testbed. Our results indicate that trace replay provides fine time-scale details that may be absent in constant bit rate attacks. Additionally, we demonstrate the effectiveness of our approach to study new and emerging attacks. We replay an Internet attack captured by the LANDER system on the DETERLab testbed within two hours.

Data from the paper is available as DoS_DNS_amplification-20130617 from the authors or http://www.predict.org, and the tools are available at DETERLab.

Categories
Papers Publications

new conference paper “Trinocular: Understanding Internet Reliability Through Adaptive Probing” in SIGCOMM 2013

The paper “Trinocular: Understanding Internet Reliability Through Adaptive Probing” was accepted by SIGCOMM’13 in Hong Kong, China (available at http://www.isi.edu/~johnh/PAPERS/Quan13c with citation and pdf, or direct pdf).

100% detection of outages one round or longer (figure 3 from the paper)

From the abstract:

Natural and human factors cause Internet outages—from big events like Hurricane Sandy in 2012 and the Egyptian Internet shutdown in Jan. 2011 to small outages every day that go unpublicized. We describe Trinocular, an outage detection system that uses active probing to understand reliability of edge networks. Trinocular is principled: deriving a simple model of the Internet that captures the information pertinent to outages, and populating that model through long-term data, and learning current network state through ICMP probes. It is parsimonious, using Bayesian inference to determine how many probes are needed. On average, each Trinocular instance sends fewer than 20 probes per hour to each /24 network block under study, increasing Internet “background radiation” by less than 0.7%. Trinocular is also predictable and precise: we provide known precision in outage timing and duration. Probing in rounds of 11 minutes, we detect 100% of outages one round or longer, and estimate outage duration within one-half round. Since we require little traffic, a single machine can track 3.4M /24 IPv4 blocks, all of the Internet currently suitable for analysis. We show that our approach is significantly more accurate than the best current methods, with about one-third fewer false conclusions, and about 30% greater coverage at constant accuracy. We validate our approach using controlled experiments, use Trinocular to analyze two days of Internet outages observed from three sites, and re-analyze three years of existing data to develop trends for the Internet.
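The Bayesian idea behind this parsimony can be sketched in a few lines (an illustrative toy, not Trinocular’s code): each ping outcome updates a belief that the block is up, weighted by the block’s long-term response rate, and probing stops as soon as the belief is conclusive either way. The availability value, false-response rate, and thresholds below are assumed for illustration.

```python
def update_belief(p_up, response, avail, false_resp=0.01):
    """One Bayesian update of P(block is up) after a single ping.
    avail: long-term fraction of the block's addresses that answer when up.
    false_resp: assumed small chance of an answer while the block is down."""
    if response:
        like_up, like_down = avail, false_resp
    else:
        like_up, like_down = 1.0 - avail, 1.0 - false_resp
    num = like_up * p_up
    return num / (num + like_down * (1.0 - p_up))

def probe_until_confident(responses, avail, p_up=0.5, hi=0.9, lo=0.1):
    """Consume ping results until the belief is conclusive either way.
    Returns (final belief, probes used)."""
    used = 0
    for r in responses:
        p_up = update_belief(p_up, r, avail)
        used += 1
        if p_up >= hi or p_up <= lo:
            break
    return p_up, used
```

Note the asymmetry this produces: one positive answer from a sparsely populated block is nearly conclusive, while several non-answers are needed before concluding the block is down, which is exactly why adaptive probing needs so few packets on average.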

Citation: Lin Quan, John Heidemann and Yuri Pradkin. Trinocular: Understanding Internet Reliability Through Adaptive Probing. In Proceedings of the ACM SIGCOMM Conference. Hong Kong, China, ACM. August, 2013. <http://www.isi.edu/~johnh/PAPERS/Quan13c>.

Datasets (listed here) used in generating this paper are available or will be available before the conference presentation.

Categories
Papers Publications

New conference paper “Evaluating Anycast in the Domain Name System” to appear at INFOCOM

The paper “Evaluating Anycast in the Domain Name System” (available at http://www.isi.edu/~xunfan/research/Fan13a.pdf) was accepted to appear at the IEEE International Conference on Computer Communications (INFOCOM) 2013 in Turin, Italy.

Recall as the number of vantage points varies. [Fan13a, figure 2]
From the abstract:

IP anycast is a central part of production DNS. While prior work has explored proximity, affinity and load balancing for some anycast services, there has been little attention to third-party discovery and enumeration of components of an anycast service. Enumeration can reveal abnormal service configurations, benign masquerading or hostile hijacking of anycast services, and help characterize anycast deployment. In this paper, we discuss two methods to identify and characterize anycast nodes. The first uses an existing anycast diagnosis method based on CHAOS-class DNS records but augments it with traceroute to resolve ambiguities. The second proposes Internet-class DNS records which permit accurate discovery through the use of existing recursive DNS infrastructure. We validate these two methods against three widely-used anycast DNS services, using a very large number (60k and 300k) of vantage points, and show that they can provide excellent precision and recall. Finally, we use these methods to evaluate anycast deployments in top-level domains (TLDs), and find one case where a third-party operates a server masquerading as a root DNS anycast node as well as a noticeable proportion of unusual DNS proxies. We also show that, across all TLDs, up to 72% use anycast.

Citation: Xun Fan, John Heidemann and Ramesh Govindan. Evaluating Anycast in the Domain Name System. To appear in Proceedings of the IEEE International Conference on Computer Communications (INFOCOM). Turin, Italy. April, 2013. <http://www.isi.edu/~johnh/PAPERS/Fan13a.html>.

Categories
Presentations

New Poster “Poster Abstract: Towards Active Measurements of Edge Network Outages” in PAM 2013

Lin Quan presented our outage work, “Poster Abstract: Towards Active Measurements of Edge Network Outages”, at the PAM 2013 conference. The poster abstract is available at http://www.isi.edu/~johnh/PAPERS/Quan13a/index.html.


End-to-end reachability is a fundamental service of the Internet. We study network outages caused by natural disasters and political upheavals. We propose a new approach to outage detection using active probing. Like prior outage detection methods, our method uses ICMP echo requests (“pings”) to detect outages, but we probe with greater density and finer granularity, showing pings can detect outages without supplemental probing. The main contribution of our work is to define how to interpret pings as outages: defining an outage as a sharp change in block responsiveness relative to recent behavior. We also provide preliminary analysis of outage rate in the Internet edge. Space constrains this poster abstract to only sketches of our approach; details and validation are in our technical report. Our data is available at no charge, see http://www.isi.edu/ant/traces/internet_outages/.
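The “sharp change relative to recent behavior” definition can be sketched as follows (an illustrative toy, not our actual analysis code); the drop and minimum-responsiveness thresholds here are assumptions, not the calibrated values from the technical report.

```python
def detect_outage(history, current, drop=0.5, min_resp=0.1):
    """Flag an outage when a block's responsiveness falls sharply relative
    to its recent behavior.
    history: recent per-round fractions of a /24's addresses answering pings.
    current: this round's fraction.  Thresholds are illustrative only."""
    if not history:
        return False          # no baseline yet
    recent = sum(history) / len(history)
    # Ignore blocks that were barely responsive to begin with.
    return recent > min_resp and current < recent * drop
```

Keying on change rather than absolute responsiveness is what lets the method cover sparsely populated blocks that never answer most pings even when healthy.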

This work is based on our technical report: http://www.isi.edu/~johnh/PAPERS/Quan12a/index.html, joint work by Lin Quan, John Heidemann and Yuri Pradkin.

Categories
Papers Publications

New conference paper “Detecting Encrypted Botnet Traffic” at Global Internet 2013

The paper “Detecting Encrypted Botnet Traffic” was accepted by Global Internet 2013 in Turin, Italy (available at http://www.netsec.colostate.edu/~zhang/DetectingEncryptedBotnetTraffic.pdf).

From the abstract:

Bot detection methods that rely on deep packet inspection (DPI) can be foiled by encryption. Encryption, however, increases entropy. This paper investigates whether adding high-entropy detectors to an existing bot detection tool that uses DPI can restore some of the bot visibility. We present two high-entropy classifiers, and use one of them to enhance BotHunter. Our results show that while BotHunter misses about 50% of the bots when they employ encryption, our high-entropy classifier restores most of its ability to detect bots, even when they use encryption.
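The core entropy observation is easy to demonstrate (a minimal sketch, not one of the paper’s classifiers): the Shannon entropy of a payload’s byte distribution approaches 8 bits per byte for encrypted or compressed data, but is far lower for typical plaintext. The threshold below is an assumed illustration, not a trained value.

```python
import math
from collections import Counter

def byte_entropy(payload: bytes) -> float:
    """Shannon entropy of the payload's byte distribution, in bits per byte.
    Encrypted or compressed data approaches 8; plaintext is far lower."""
    if not payload:
        return 0.0
    n = len(payload)
    counts = Counter(payload)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_encrypted(payload: bytes, threshold: float = 7.0) -> bool:
    """Crude flag; a deployed classifier would calibrate this threshold."""
    return byte_entropy(payload) >= threshold
```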

This work is advised by Christos Papadopoulos and Dan Massey at Colorado State University.

Categories
Presentations

New Talk “A Fresh Look At Scalable Forwarding Through Router FIB Caching”

Kaustubh Gadkari gave a talk on “A Fresh Look At Scalable Forwarding Through Router FIB Caching” at NANOG57 in Orlando, FL. Slides for the talk are available in pptx or pdf.

Kaustubh Gadkari at NANOG57

This talk presented current research into the possibility of employing caching on router FIBs to reduce the amount of FIB memory required to forward packets. Our analysis shows that 99%+ of packets can be forwarded from the cache with a cache size of 10,000 entries. Packets that caused cache misses were TCP SYNs and SYNACKs; no data packets were queued. Our analysis also shows that our caching system is robust against attacks on the cache.
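The caching idea can be sketched with a plain LRU cache in front of the full table (an illustrative toy, not the talk’s design, which must also deal with issues like nested prefixes):

```python
from collections import OrderedDict

class FibCache:
    """LRU cache of FIB entries (prefix -> next hop).  Illustrative only;
    a real design must also handle prefix nesting correctly."""
    def __init__(self, capacity, full_fib):
        self.capacity = capacity
        self.full_fib = full_fib        # slow-path full table
        self.cache = OrderedDict()
        self.hits = self.misses = 0

    def lookup(self, prefix):
        if prefix in self.cache:
            self.hits += 1
            self.cache.move_to_end(prefix)      # mark most recently used
            return self.cache[prefix]
        self.misses += 1                        # slow path: full FIB
        next_hop = self.full_fib[prefix]
        self.cache[prefix] = next_hop
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)      # evict least recently used
        return next_hop

    def hit_rate(self):
        total = self.hits + self.misses
        return self.hits / total if total else 0.0
```

Because traffic is heavily skewed toward a few popular prefixes, even a small cache like this achieves very high hit rates, which is the effect the talk quantifies.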

This work is part of our ongoing analysis of FIB caching, advised by Christos Papadopoulos and Dan Massey at Colorado State University.

Categories
Presentations

new talk “Active Probing of Edge Networks: Outages During Hurricane Sandy” at NANOG57

John Heidemann gave the talk “Active Probing of Edge Networks: Outages During Hurricane Sandy” at NANOG57 in Orlando, Florida on Feb. 5, 2013 as part of a panel on Hurricane Sandy, hosted by James Cowie at Renesys. Slides are available at http://www.isi.edu/~johnh/PAPERS/Heidemann13b.html.


This talk summarizes our analysis of outages in edge networks at the time of Hurricane Sandy. This analysis showed U.S. networks had double the outage rate (from 0.2% to 0.4%) on 2012-10-30, the day after Sandy’s landfall, and recovered after four days. The talk was part of the panel “Internet Impacts of Hurricane Sandy”, moderated by James Cowie, with presentations by John Heidemann, USC/Information Sciences Institute; Emile Aben, RIPE NCC; Patrick Gilmore, Akamai; and Doug Madory, Renesys.

This work is based on our recent technical report “A Preliminary Analysis of Network Outages During Hurricane Sandy”, joint work of John Heidemann, Lin Quan, and Yuri Pradkin.

Categories
Papers Publications

New conference paper “Towards Geolocation of Millions of IP Addresses” at IMC 2012

The paper “Towards Geolocation of Millions of IP Addresses” was accepted by IMC 2012 in Boston, MA (available at http://www.isi.edu/~johnh/PAPERS/Hu12a.html).

From the abstract:

Previous measurement-based IP geolocation algorithms have focused on accuracy, studying a few targets with increasingly sophisticated algorithms taking measurements from tens of vantage points (VPs). In this paper, we study how to scale up existing measurement-based geolocation algorithms like Shortest Ping and CBG to cover the whole Internet. We show that with many vantage points, VP proximity to the target is the most important factor affecting accuracy. This observation suggests our new algorithm that selects the best few VPs for each target from many candidates. This approach addresses the main bottleneck to geolocation scalability: minimizing traffic into each target (and also out of each VP) while maintaining accuracy. Using this approach we have currently geolocated about 35% of the allocated, unicast, IPv4 address-space (about 85% of the addresses in the Internet that can be directly geolocated). We visualize our geolocation results on a web-based address-space browser.
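The vantage-point selection idea can be sketched simply (hypothetical Python, not the paper’s system): rank candidate VPs by measured RTT to the target, keep only the closest few, and, in the Shortest Ping spirit, estimate the target’s location from the nearest VP. All VP names and coordinates below are invented for illustration.

```python
def select_best_vps(rtts, k=3):
    """rtts: vantage point -> measured RTT (ms) to the target.
    Keep only the k closest VPs; proximity matters most for accuracy."""
    return sorted(rtts, key=rtts.get)[:k]

def shortest_ping_estimate(rtts, vp_locations):
    """Shortest-Ping-style estimate: place the target at the location of
    the vantage point with the smallest RTT."""
    best = min(rtts, key=rtts.get)
    return vp_locations[best]
```

Selecting a few good VPs per target, rather than probing from every VP, is what keeps traffic into each target (and out of each VP) low enough to scale to millions of addresses.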

Citation: Zi Hu, John Heidemann and Yuri Pradkin. Towards Geolocation of Millions of IP Addresses. In Proceedings of the ACM Internet Measurement Conference, to appear. Boston, MA, USA, ACM. 2012. <http://www.isi.edu/~johnh/PAPERS/Hu12a.html>.