new talk “DNS Privacy, Service Management, and Research: Friends or Foes” at the NDSS DNS Privacy Workshop 2017

John Heidemann gave the talk “DNS Privacy, Service Management, and Research: Friends or Foes” at the NDSS DNS Privacy Workshop in San Diego, California, USA on February 26, 2017.  Slides are available at http://www.isi.edu/~johnh/PAPERS/Heidemann17a.pdf.
The talk does not have a formal abstract, but to summarize:

A slide from the [Heidemann17a] talk, looking at what different DNS stakeholders may want.

This invited talk is part of a panel on the tension between DNS privacy and service management.  In the talk I expand on that topic to discuss the three-way tension between DNS privacy, service management, and research, and I offer suggestions for how service management and research can proceed while still providing basic privacy.

Although not discussed in the talk, we distribute several DNS datasets, available at https://ant.isi.edu/datasets/ and at https://impactcybertrust.org.  We also provide dnsanon, a tool to anonymize DNS queries.
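For readers curious what DNS anonymization involves in practice, here is a minimal sketch of one common approach: replacing client IP addresses with keyed hashes, so repeated queries from the same client remain linkable without exposing the address. This is a generic illustration only, not a description of dnsanon’s actual algorithm; the key and function names are hypothetical.

```python
# A generic illustration of one common approach to DNS-log anonymization:
# replace each client IP with a keyed hash (HMAC), so repeated queries from
# one client stay linkable without exposing the address. This is NOT a
# description of dnsanon's actual algorithm; the key and names are made up.
import hmac
import hashlib

SECRET_KEY = b"example-key-rotate-me"  # hypothetical key; keep it private

def anonymize_ip(ip: str) -> str:
    """Map an IP address to a stable, opaque token."""
    digest = hmac.new(SECRET_KEY, ip.encode(), hashlib.sha256).hexdigest()
    return digest[:16]  # truncate for readability

# The same client always maps to the same token; different clients differ.
print(anonymize_ip("192.0.2.7"))
print(anonymize_ip("192.0.2.7"))
print(anonymize_ip("198.51.100.3"))
```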


new talk “Distributed Denial-of-Service: What Datasets Can Help?” at ACSAC 2016

John Heidemann gave the talk “Distributed Denial-of-Service: What Datasets Can Help?” at ACSAC 2016 in Universal City, California, USA on December 7, 2016.  Slides are available at http://www.isi.edu/~johnh/PAPERS/Heidemann16d.pdf.

From the abstract:

Distributed Denial-of-Service attacks are a continuing threat to the Internet. Meeting this threat requires new approaches that will emerge from new research, but new research requires the support of datasets and experimental methods. This talk describes four different aspects of research on DDoS, privacy, and security, and the datasets that we have generated to support that research. Areas we consider are detecting low-rate DDoS attacks, understanding the effects of DDoS on DNS infrastructure, evolving the DNS protocol to prevent DDoS and improve privacy, and ideas about experimental testbeds to evaluate new ideas in DDoS defense for DNS. Datasets described in this talk are available at no cost from the author and through the IMPACT Program.

This talk is based on work with many prior collaborators: Terry Benzel, Wes Hardaker, Christian Hesselman, Zi Hu, Allison Mankin, Urbashi Mitra, Giovane Moura, Moritz Müller, Ricardo de O. Schmidt, Nikita Somaiya, Gautam Thatte, Wouter de Vries, Lan Wei, Duane Wessels, and Liang Zhu.

Datasets from the paper are available at https://ant.isi.edu/datasets/ and at https://impactcybertrust.org.


new talk “Anycast vs. DDoS: Evaluating Nov. 30” at DNS-OARC

John Heidemann gave the talk “Anycast vs. DDoS: Evaluating Nov. 30” at DNS-OARC in Dallas, Texas, USA on October 16, 2016.  Slides are available at http://www.isi.edu/~johnh/PAPERS/Heidemann16c.pdf.

Possible outcomes of anycast under stress, a slide from a talk about the Nov. 30, 2015 Root DNS event.

From the abstract:

Distributed Denial-of-Service (DDoS) attacks continue to be a major threat in the Internet today. DDoS attacks overwhelm target services with requests or other “bogus” traffic, causing requests from legitimate users to be shut out. A common defense against DDoS is to replicate the service in multiple physical locations or sites. If all sites announce a common IP address, BGP will associate users around the Internet with a nearby site, defining the catchment of that site. Anycast adds resilience against DDoS both by increasing capacity to the aggregate of many sites, and by allowing each catchment to contain attack traffic, leaving other sites unaffected. IP anycast is widely used for commercial CDNs and essential infrastructure such as DNS, but there is little evaluation of anycast under stress.

This talk will provide a first evaluation of several anycast services under stress with public data. Our subject is the Internet’s Root Domain Name Service, made up of 13 independently designed services (“letters”, 11 with IP anycast) running at more than 500 sites. Many of these services were stressed by sustained traffic at 100x normal load on Nov. 30 and Dec. 1, 2015. We use public data for most of our analysis to examine how different services respond to these events. In our analysis we identify two policies by operators: (1) sites may absorb attack traffic, containing the damage but reducing service to some users, or (2) they may withdraw routes to shift both legitimate and bogus traffic to other sites. We study how these deployment policies result in different levels of service to different users, during and immediately after the attacks.

We also show evidence of collateral damage on other services located near the attack targets. The work is based on analysis of DNS responses from around 9000 RIPE Atlas vantage points (or “probes”), augmented by RSSAC-002 reports from 5 root letters and BGP data from BGPmon. We examine DNS performance for each Root Letter, for anycast sites inside specific letters, and for specific servers at one site.
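To make the catchment accounting concrete, here is a toy sketch of the kind of tally involved: given an observed mapping from vantage points (VPs) to the anycast site that answered them, count each site’s catchment before and during an event and flag VPs that shifted. All VP and site names below are invented; the talk’s actual analysis of RIPE Atlas, RSSAC-002, and BGP data is far more involved.

```python
# A toy version of catchment accounting: given which anycast site each
# vantage point (VP) reached before and during an event, tally catchment
# sizes and flag VPs that shifted sites. All VP and site names are invented.
from collections import Counter

before = {"vp1": "LAX", "vp2": "LAX", "vp3": "AMS", "vp4": "NRT"}
during = {"vp1": "AMS", "vp2": "LAX", "vp3": "AMS", "vp4": "AMS"}

def catchments(obs: dict) -> Counter:
    """Count how many VPs land in each site's catchment."""
    return Counter(obs.values())

print("before:", catchments(before))
print("during:", catchments(during))

# VPs that changed site are consistent with a route withdrawal under stress.
shifted = [vp for vp in before if before[vp] != during.get(vp)]
print("shifted VPs:", shifted)
```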

This talk is based on the work in the paper “Anycast vs. DDoS: Evaluating the November 2015 Root DNS Event”, to appear at IMC 2016, by Giovane C. M. Moura, Ricardo de O. Schmidt, John Heidemann, Wouter B. de Vries, Moritz Müller, Lan Wei, and Christian Hesselman.

Datasets from the paper are available at https://ant.isi.edu/datasets/anycast/


new talk “Anycast Latency: How Many Sites are Enough?” at DNS-OARC

John Heidemann gave the talk “Anycast Latency: How Many Sites are Enough?” at DNS-OARC in Dallas, Texas, USA on October 16, 2016.  Slides are available at http://www.isi.edu/~johnh/PAPERS/Heidemann16b.pdf.

Comparing actual (obtained) anycast latency against optimal possible anycast latency, for 4 different anycast deployments (each a Root Letter). From the talk [Heidemann16b], based on data from [Moura16b].
From the abstract:

This talk will evaluate anycast latency. An anycast service uses multiple sites to provide high availability, capacity, and redundancy, with BGP routing associating users to nearby anycast sites. Routing defines the catchment of the users that each site serves. Although prior work has studied how users associate with anycast services informally, in this paper we examine the key question of how many anycast sites are needed to provide good latency, and the worst-case latencies that specific deployments see. To answer this question, we must first define the optimal performance that is possible, then explore how routing, specific anycast policies, and site location affect performance. We develop a new method capable of determining optimal performance and use it to study four real-world anycast services operated by different organizations: C-, F-, K-, and L-Root, each part of the Root DNS service. We measure their performance from worldwide vantage points (VPs) in RIPE Atlas. (Given the VPs’ uneven geographic distribution, we evaluate and control for potential bias.) Key results of our study show that a few sites can provide performance nearly as good as many, and that geographic location and good connectivity have a far stronger effect on latency than having many nodes. We show how often users see the closest anycast site, and how strongly routing policy affects site selection.
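The actual-versus-optimal comparison in the slide above can be illustrated with a small sketch: for each VP, optimal latency is the minimum RTT across all sites, while actual latency is the RTT to the site routing actually chose. All numbers below are invented; the study derives these values from RIPE Atlas measurements.

```python
# Toy comparison of actual vs. optimal anycast latency. For each VP,
# optimal latency is the minimum RTT over all sites; actual latency is the
# RTT to the site routing selected. All numbers here are invented.
latency_ms = {  # hypothetical VP -> {site: RTT in ms}
    "vp1": {"LAX": 12.0, "AMS": 150.0, "NRT": 110.0},
    "vp2": {"LAX": 95.0, "AMS": 20.0, "NRT": 230.0},
}
chosen_site = {"vp1": "NRT", "vp2": "AMS"}  # site each VP actually reached

for vp, rtts in latency_ms.items():
    optimal = min(rtts.values())
    actual = rtts[chosen_site[vp]]
    # The difference shows how far routing is from the best possible site.
    print(f"{vp}: actual={actual:.1f} ms, optimal={optimal:.1f} ms, "
          f"inflation={actual - optimal:.1f} ms")
```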

This talk is based on the work in the technical report “Anycast Latency: How Many Sites Are Enough?” (ISI-TR-2016-708), by Ricardo de O. Schmidt, John Heidemann, and Jan Harm Kuipers.

Datasets from the paper are available at https://ant.isi.edu/datasets/anycast/


new talk “New Opportunities for Research and Experiments in Internet Naming And Identification” at the AIMS Workshop

John Heidemann gave the talk “New Opportunities for Research and Experiments in Internet Naming And Identification” at the AIMS 2016 workshop at CAIDA, La Jolla, California on February 11, 2016.  Slides are available at http://www.isi.edu/~johnh/PAPERS/Heidemann16a.pdf.

Needs for new naming and identity research prompt new research infrastructure, enabling new research directions.

From the abstract:

DNS is central to Internet use today, yet research on DNS is challenging: many researchers find it difficult to create realistic experiments at scale that are representative of the large installed base, and datasets are often short (two days or less) or otherwise limited. Yet DNS evolution presses on: improvements to privacy are needed, and extensions like DANE provide an opportunity for DNS to improve security and support identity management. We are exploring how to grow the research community and enable meaningful work on Internet naming. In this talk we will propose new research infrastructure to support realistic DNS experiments and longitudinal data studies. We are looking for feedback on our proposed approaches and input about your pressing research problems in Internet naming and identification.

For more information see our project website.


new animation: the August 2014 Time Warner outage

Global network outages on 2014-08-27 during the Time Warner event in the U.S.

On August 27, 2014, Time Warner suffered a network outage that affected about 11 million customers for more than two hours (making national news). We have been observing global network outages since December 2013, including this outage.

We recently animated this August Time Warner outage.

We see that the Time Warner outage lasted about two hours and affected a large swath of the United States. We caution that all large network operators have occasional outages; this animation is not intended to complain about Time Warner, but to illustrate the need for tools that can detect and visualize national-level outages.  It also puts the outage into context: we can see a few other outages in Uruguay, Brazil, and Saudi Arabia.
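For a flavor of how such outage detection can work, below is a much-simplified sketch of Bayesian block-status inference in the spirit of Trinocular, our outage detection system: we maintain a belief that an address block is up and update it after each probe, so a run of unanswered probes drives the belief down. The likelihood values here are illustrative placeholders, not Trinocular’s actual model or parameters.

```python
# A much-simplified sketch of Bayesian outage inference: keep a belief that
# an address block is up and update it after each probe. The likelihoods
# below are illustrative placeholders, not Trinocular's actual parameters.
P_RESP_IF_UP = 0.6     # assumed chance a probe to a live block is answered
P_RESP_IF_DOWN = 0.01  # replies are rare if the block is down (assumed)

def update_belief(p_up: float, got_reply: bool) -> float:
    """One Bayes update of P(block up) after a single probe."""
    if got_reply:
        num = P_RESP_IF_UP * p_up
        den = num + P_RESP_IF_DOWN * (1.0 - p_up)
    else:
        num = (1.0 - P_RESP_IF_UP) * p_up
        den = num + (1.0 - P_RESP_IF_DOWN) * (1.0 - p_up)
    return num / den

belief = 0.9  # prior: the block was probably up
for reply in [False, False, False, False]:  # a run of unanswered probes
    belief = update_belief(belief, reply)
    print(f"P(up) = {belief:.3f}")
# Belief drops quickly; a threshold on it flags a likely outage.
```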

This analysis uses dataset usc-lander/internet_outage_adaptive_a17all-20140701, available for research use from PREDICT, or by request from us if PREDICT access is not possible.

This animation was first shown at the Dec. 2014 DHS Cyber Security Division R&D Showcase and Technical Workshop as part of the talk “Towards Understanding Internet Reliability” given by John Heidemann. This work was supported by DHS, most recently through the LACREND project.


new animation: a sample of U.S. networks, before and after Hurricane Sandy

In October 2012, Hurricane Sandy made landfall on the U.S. East Coast causing widespread power outages. We were able to see the effects of Hurricane Sandy by analyzing active probing of the Internet. We first reported this work in a technical report and then with more refined analysis in a peer-reviewed paper.

Network outages for a sample of U.S. East Coast networks on the day after Hurricane Sandy made landfall.

We recently animated our data showing Hurricane Sandy landfall.

These 4 days before landfall and 7 days after show some interesting results: on the day of landfall we see about three times the number of outages relative to “typical” U.S. networks, and it takes about four days to recover back to typical conditions.

This analysis uses dataset usc-lander/internet_address_survey_reprobing_it50j, available for research use from PREDICT, or by request from us if PREDICT access is not possible.

This animation was first shown at the Dec. 2014 DHS Cyber Security Division R&D Showcase and Technical Workshop as part of the talk “Towards Understanding Internet Reliability” given by John Heidemann. This work was supported by DHS, most recently through the LACREND project.


new animation: eight years of Internet IPv4 Censuses

We’ve been taking Internet IPv4 censuses regularly since 2006.  In each census, we probe the entire allocated IPv4 address space.  You may browse 8 years of data at our IPv4 address browser.
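Probing the entire allocated IPv4 space is more practical when addresses are visited in a pseudorandom order, spreading probes across networks rather than sweeping each one sequentially. The sketch below illustrates that idea with a full-period linear congruential generator; this is a standard trick shown for illustration, not necessarily the ordering our census prober actually uses.

```python
# Toy census ordering: visit every 32-bit IPv4 address exactly once in a
# pseudorandom order, using a full-period linear congruential generator,
# so probes are spread across networks instead of sweeping them in order.
# A standard trick for illustration, not necessarily our prober's method.
import ipaddress

A, C, M = 1664525, 1013904223, 2**32  # full-period LCG constants

def census_order(seed: int = 1, limit: int = 5):
    """Yield the first `limit` addresses of a full 2^32 permutation."""
    x = seed % M
    for _ in range(limit):
        x = (A * x + C) % M
        yield ipaddress.IPv4Address(x)

for addr in census_order():
    # A real census would skip unallocated/reserved space and send a probe.
    print(addr)
```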

A still image from our animation of 8 years of IPv4 censuses.

We recently put together an animation showing 8 years of IPv4 censuses, from 2006 through 2014.

These eight years show some interesting events, from an early “open” Internet in 2006, to full allocation of IPv4 by ICANN in 2011, to higher utilization in 2014.

All data shown here can be browsed at our website.
Data is available for research use from PREDICT or by request from us if PREDICT access is not possible.

This animation was first shown at the Dec. 2014 DHS Cyber Security Division R&D Showcase and Technical Workshop as part of the talk “Towards Understanding Internet Reliability” given by John Heidemann.  This work was supported by DHS, most recently through the LACREND project.


new talk “Internet Populations (Good and Bad): Measurement, Estimation, and Correlation” at the ICERM Workshop on Cybersecurity

John Heidemann gave the talk “Internet Populations (Good and Bad): Measurement, Estimation, and Correlation” at the ICERM Workshop on Cybersecurity at Brown University, Providence, Rhode Island on October 22, 2014. Slides are available at http://www.isi.edu/~johnh/PAPERS/Heidemann14e/.

Can we improve the mathematical tools we use to measure and understand the Internet?

From the abstract:

Our research studies the Internet’s public face. Since 2006 we have been taking censuses of the Internet address space (pinging all IPv4 addresses) every 3 months. Since 2012 we have studied network outages and events like Hurricane Sandy, using probes of much of the Internet every 11 minutes. Most recently we have evaluated the diurnal Internet, finding countries where most people turn off their computers at night. Finally, we have looked at network reputation, identifying how spam generation correlates with network location, and others have studied multiple measurements of “network reputation”.

A common theme across this work is that one must estimate characteristics of the edge of the Internet in spite of noisy measurements and underlying change. One also needs to compare and correlate these imperfect measurements with other factors (from GDP to telecommunications policies).

How do these applications relate to the mathematics of census taking and measurement, estimation, and correlation? Are there tools we should be using that we aren’t? Do the properties of the Internet suggest new approaches (for example, where rapid full enumeration is possible)? Do correlation and estimates of network “badness” help us improve cybersecurity by treating different parts of the network differently?


new animation “Watching the Internet Sleep”

Does the Internet sleep? Yes, and we have the video!

We have recently put together a video showing 35 days of Internet address usage as observed from Trinocular, our outage detection system.

The Internet sleeps: address use in South America is low (blue) in the early morning, while India is high (red) in afternoon.

When we look at address usage over time, we see that some parts of the globe have daily swings of +/-10% to 20% in the number of active addresses. In China, India, eastern Europe, and much of South America, the Internet sleeps.
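As a toy illustration of how such a diurnal swing can be quantified, the sketch below computes the swing of hourly active-address counts around the daily mean. The synthetic data mimics a region that dips overnight; our real analysis uses Trinocular observations, not this toy series.

```python
# Toy measurement of a diurnal swing: compute how far hourly counts of
# active addresses swing around the daily mean. The synthetic data mimics
# a region that dips overnight; real analysis uses Trinocular observations.
import math

hours = range(24)
# Synthetic hourly active-address counts with a day/night cycle.
active = [1000 + 150 * math.sin(2 * math.pi * (h - 6) / 24) for h in hours]

mean = sum(active) / len(active)
swing = (max(active) - min(active)) / (2 * mean)  # +/- fraction of the mean
print(f"diurnal swing: +/-{swing:.1%} around the daily mean")  # ~ +/-15%
```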

Understanding when the Internet sleeps is important to understanding how different countries’ network policies affect use; it is part of outage detection; and it is a step toward our long-term goal of understanding exactly how big the Internet is.

See http://www.isi.edu/ant/diurnal/ for the video, or read our technical paper “When the Internet Sleeps: Correlating Diurnal Networks With External Factors” by Quan, Heidemann, and Pradkin, to appear at ACM IMC, Nov. 2014.

Datasets (listed here) used in generating this video are available.

This work is partly supported by DHS S&T, Cyber Security Division, agreement FA8750-12-2-0344 (under AFRL) and N66001-13-C-3001 (under SPAWAR).  The views contained herein are those of the authors and do not necessarily represent those of DHS or the U.S. Government.  This work was classified by USC’s IRB as non-human subjects research (IIR00001648).