
new talk “Anycast Latency: How Many Sites are Enough?” at DNS-OARC

John Heidemann gave the talk “Anycast Latency: How Many Sites are Enough?” at DNS-OARC in Dallas, Texas, USA on October 16, 2016.  Slides are available at http://www.isi.edu/~johnh/PAPERS/Heidemann16b.pdf.

Comparing actual (obtained) anycast latency against optimal possible anycast latency, for 4 different anycast deployments (each a Root Letter). From the talk [Heidemann16b], based on data from [Moura16b].
From the abstract:

This talk will evaluate anycast latency. An anycast service uses multiple sites to provide high availability, capacity and redundancy, with BGP routing associating users to nearby anycast sites. Routing defines the catchment of the users that each site serves. Although prior work has studied how users associate with anycast services informally, in this paper we examine the key question of how many anycast sites are needed to provide good latency, and what worst-case latencies specific deployments see. To answer this question, we must first define the optimal performance that is possible, then explore how routing, specific anycast policies, and site location affect performance. We develop a new method capable of determining optimal performance and use it to study four real-world anycast services operated by different organizations: C-, F-, K-, and L-Root, each part of the Root DNS service. We measure their performance from worldwide vantage points (VPs) in RIPE Atlas. (Given the VPs’ uneven geographic distribution, we evaluate and control for potential bias.) Key results of our study are to show that a few sites can provide performance nearly as good as many, and that geographic location and good connectivity have a far stronger effect on latency than having many nodes. We show how often users see the closest anycast site, and how strongly routing policy affects site selection.

This talk is based on the work in the technical report “Anycast Latency: How Many Sites Are Enough?” (ISI-TR-2016-708), by Ricardo de O. Schmidt, John Heidemann, and Jan Harm Kuipers.

Datasets from the paper are available at https://ant.isi.edu/datasets/anycast/


new conference paper “Anycast vs. DDoS: Evaluating the November 2015 Root DNS Event” in IMC 2016

The paper “Anycast vs. DDoS: Evaluating the November 2015 Root DNS Event” will appear at the ACM Internet Measurement Conference (IMC 2016) in November 2016 in Santa Monica, California, USA (available at http://www.isi.edu/~weilan/PAPER/IMC2016camera.pdf).

RIPE Atlas VPs going to different anycast sites when under stress. Colors indicate different sites, with black showing unsuccessful queries. [Moura16b, figure 11b]

From the abstract:

Distributed Denial-of-Service (DDoS) attacks continue to be a major threat in the Internet today. DDoS attacks overwhelm target services with requests or other traffic, causing requests from legitimate users to be shut out. A common defense against DDoS is to replicate the service in multiple physical locations or sites. If all sites announce a common IP address, BGP will associate users around the Internet with a nearby site, defining the catchment of that site. Anycast addresses DDoS both by increasing capacity to the aggregate of many sites, and by allowing each catchment to contain attack traffic, leaving other sites unaffected. IP anycast is widely used for commercial CDNs and essential infrastructure such as DNS, but there is little evaluation of anycast under stress. This paper provides the first evaluation of several anycast services under stress with public data. Our subject is the Internet’s Root Domain Name Service, made up of 13 independently designed services (“letters”, 11 with IP anycast) running at more than 500 sites. Many of these services were stressed by sustained traffic at 100 times normal load on Nov. 30 and Dec. 1, 2015. We use public data for most of our analysis to examine how different services respond to these events. We see how different anycast deployments respond to stress, and identify two policies: sites may absorb attack traffic, containing the damage but reducing service to some users, or they may withdraw routes to shift both good and bad traffic to other sites. We study how these deployment policies result in different levels of service for different users. We also show evidence of collateral damage on other services located near the attacks.
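
To make the two policies concrete, here is a toy model of our own (not the paper's analysis; all capacities and loads are made-up numbers) of how much legitimate traffic survives when sites absorb an attack versus when overloaded sites withdraw their routes:

    def legit_served(sites, policy):
        """Fraction of legitimate queries answered under a toy model.
        sites maps a site name to (capacity, attack, legit) in queries/sec."""
        total_legit = sum(l for _, _, l in sites.values())
        if policy == "absorb":
            # Every site stays up; an overloaded site answers queries in
            # proportion to its capacity, so damage stays in its catchment.
            return sum(l * min(1.0, c / (a + l))
                       for c, a, l in sites.values()) / total_legit
        # "withdraw": overloaded sites pull their BGP route, shifting both
        # good and bad traffic onto the surviving sites (here, evenly).
        up = [v for v in sites.values() if v[1] + v[2] <= v[0]]
        down = [v for v in sites.values() if v[1] + v[2] > v[0]]
        if not up:
            return 0.0
        extra_a = sum(a for _, a, _ in down) / len(up)
        extra_l = sum(l for _, _, l in down) / len(up)
        served = sum((l + extra_l) * min(1.0, c / (a + extra_a + l + extra_l))
                     for c, a, l in up)
        return served / total_legit

    # One big site under attack and one small clean site:
    sites = {"big": (1000, 1500, 100), "small": (200, 50, 50)}
    print(legit_served(sites, "absorb"))    # 0.75: loss is local to "big"
    print(legit_served(sites, "withdraw"))  # ~0.12: attack moves to "small"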

This IMC paper is joint work of Giovane C. M. Moura, Moritz Müller, Cristian Hesselman (SIDN Labs), Ricardo de O. Schmidt, Wouter B. de Vries (U. Twente), and John Heidemann and Lan Wei (USC/ISI). Datasets in this paper are derived from RIPE Atlas and are available at http://traces.simpleweb.org/ and at https://ant.isi.edu/datasets/anycast/.


timefind v1.0.3 released with recursion support

timefind v1.0.3 has been released (available at https://ant.isi.edu/software/timefind/).

indexer indexes multiple network data types by time, and timefind uses those indexes to select the files that fall within a given time range.

Major changes in 1.0.3 include:

  • new file processors for .csv, .fsdb, syslog, and BGP/MRT files
  • timefind and indexer now support recursive traversal of the file hierarchy
  • index entries now have a “last modified” timestamp column: existing entries are reindexed if the file was modified after the index was created (see the sketch below)
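
A rough sketch of the recursion-plus-reindex logic (ours, not timefind's actual code; the index format shown is hypothetical):

    import os

    # Hypothetical index: maps file path -> {"begin": t0, "end": t1,
    # "mtime": mtime-at-indexing-time}.
    def files_to_reindex(root, index):
        """Recursively walk a file hierarchy and yield files that are new
        or were modified after their index entry was created."""
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                entry = index.get(path)
                if entry is None or os.path.getmtime(path) > entry["mtime"]:
                    yield path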

Many thanks to Paul Ferrell (LANL) and Paige Hanson (LANL) for their contributions to the timefind extensions.


new software dnsanon_rssac

On 2016-06-13 we released version 1.3 of dnsanon_rssac, a tool that processes DNS data seen in packet captures (typically in pcap format) to generate RSSAC-002 statistics reports.

Our tool is at https://ant.isi.edu/software/dnsanon_rssac/index.html, with a description at https://ant.isi.edu/software/dnsanon_rssac/README.html. Our tool builds on dnsanon.

The main goal of our implementation is that partial processing can be done independently and then merged. Merging works both for files captured at different times of day and for files captured at different anycast sites.
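
The idea is that each partial report is a set of counters, and counters merge by addition, which is associative and commutative. A minimal sketch (the metric names here are illustrative, not the exact RSSAC-002 labels):

    from collections import Counter

    def merge_partial_reports(reports):
        """Merge partial per-file or per-site counts by summing; order
        does not matter, so captures from different times of day or
        different anycast sites can be combined freely."""
        total = Counter()
        for report in reports:
            total.update(report)
        return total

    # e.g., partial counts from two anycast sites for the same day:
    site_a = Counter({"udp-queries": 1200, "tcp-queries": 40})
    site_b = Counter({"udp-queries": 900, "tcp-queries": 25})
    print(merge_partial_reports([site_a, site_b]))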

Our software stack has run at B-Root since February 2016, and has been in production use since May 2016.

To our knowledge, this tool is the first to implement the RSSAC-002v3 specification.

new RFC “Specification for DNS over Transport Layer Security (TLS)”

The Internet RFC 7858, “Specification for DNS over Transport Layer Security (TLS)”, was just released by the IETF as a Standards Track document.

From the abstract:

This document describes the use of Transport Layer Security (TLS) to provide privacy for DNS. Encryption provided by TLS eliminates opportunities for eavesdropping and on-path tampering with DNS queries in the network, such as discussed in RFC 7626. In addition, this document specifies two usage profiles for DNS over TLS and provides advice on performance considerations to minimize overhead from using TCP and TLS with DNS.

This document focuses on securing stub-to-recursive traffic, as per the charter of the DPRIVE Working Group. It does not prevent future applications of the protocol to recursive-to-authoritative traffic.
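
Mechanically, DNS over TLS is ordinary DNS over TCP (each message preceded by a two-byte length prefix, per RFC 1035) carried inside a TLS session on port 853. A minimal sketch (the resolver name below is a placeholder for any DoT-capable server):

    import socket, ssl, struct

    def read_exact(tls, n):
        """Read exactly n bytes from the TLS stream."""
        buf = b""
        while len(buf) < n:
            chunk = tls.recv(n - len(buf))
            if not chunk:
                raise EOFError("connection closed")
            buf += chunk
        return buf

    def dot_query(server, qname):
        """Send one A-record query for qname over DNS-over-TLS."""
        # Minimal DNS query: header (fixed ID, RD=1, one question).
        header = struct.pack("!HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
        wire_name = b"".join(bytes([len(l)]) + l.encode()
                             for l in qname.split(".")) + b"\x00"
        msg = header + wire_name + struct.pack("!HH", 1, 1)  # QTYPE=A, QCLASS=IN

        ctx = ssl.create_default_context()
        with socket.create_connection((server, 853)) as sock:
            with ctx.wrap_socket(sock, server_hostname=server) as tls:
                tls.sendall(struct.pack("!H", len(msg)) + msg)  # length prefix
                (rlen,) = struct.unpack("!H", read_exact(tls, 2))
                return read_exact(tls, rlen)  # raw DNS response message

    # e.g.: response = dot_query("a-dot-resolver.example", "example.com")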

This RFC is joint work of Zi Hu, Liang Zhu, John Heidemann, Allison Mankin, Duane Wessels, and Paul Hoffman, of USC/ISI, Verisign, ICANN, and independent (at different times). This RFC is one result of our prior paper “Connection-Oriented DNS to Improve Privacy and Security”, but it also represents the input of the DPRIVE IETF working group (Warren Kumari and Tim Wicinski, chairs), where it is one of a set of RFCs designed to improve DNS privacy.

On to deployments!


new technical report “Anycast Latency: How Many Sites Are Enough?”

We have released a new technical report “Anycast Latency: How Many Sites Are Enough?”, ISI-TR-2016-708, available at http://www.isi.edu/%7ejohnh/PAPERS/Schmidt16a.pdf.

[Schmidt16a] figure 4: distribution of measured latency (solid lines) to optimal possible latency (dashed lines) for 4 Root DNS anycast deployments.
From the abstract:

Anycast is widely used today to provide important services including naming and content, with DNS and Content Delivery Networks (CDNs). An anycast service uses multiple sites to provide high availability, capacity and redundancy, with BGP routing associating users to nearby anycast sites. Routing defines the catchment of the users that each site serves. Although prior work has studied how users associate with anycast services informally, in this paper we examine the key question of how many anycast sites are needed to provide good latency, and what worst-case latencies specific deployments see. To answer this question, we must first define the optimal performance that is possible, then explore how routing, specific anycast policies, and site location affect performance. We develop a new method capable of determining optimal performance and use it to study four real-world anycast services operated by different organizations: C-, F-, K-, and L-Root, each part of the Root DNS service. We measure their performance from worldwide vantage points (VPs) in RIPE Atlas. (Given the VPs’ uneven geographic distribution, we evaluate and control for potential bias.) Key results of our study are to show that a few sites can provide performance nearly as good as many, and that geographic location and good connectivity have a far stronger effect on latency than having many nodes. We show how often users see the closest anycast site, and how strongly routing policy affects site selection.
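
The notion of “optimal” in figure 4 is simple: for each vantage point, the best anycast could do is the latency to its closest site. A small sketch of the comparison (variable names are illustrative, not the paper's code; the RTTs are made up):

    def optimal_latency(site_latencies):
        """Best latency a VP could see if routing always picked its
        lowest-latency site; site_latencies maps site -> RTT in ms."""
        return min(site_latencies.values())

    def inflation(measured_anycast_ms, site_latencies):
        """How much worse the VP's actual catchment is than the best one."""
        return measured_anycast_ms - optimal_latency(site_latencies)

    # e.g., a VP measuring 42 ms via anycast:
    sites = {"lax": 12.0, "ams": 95.0, "nrt": 140.0}
    print(optimal_latency(sites), inflation(42.0, sites))  # 12.0 30.0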

This technical report is joint work of Ricardo de O. Schmidt and Jan Harm Kuipers (U. Twente) and John Heidemann (USC/ISI). Datasets in this paper are derived from RIPE Atlas and are available at http://traces.simpleweb.org/.

new workshop paper “Assessing Co-Locality of IP Blocks” in GI 2016

The paper “Assessing Co-Locality of IP Blocks” appeared at the 19th IEEE Global Internet Symposium on April 11, 2016 in San Francisco, CA, USA, and is available at http://www.cs.colostate.edu/~manafgh/publications/Assessing-Co-Locality-of-IP-Block-GI2016.pdf. The datasets are available at https://ant.isi.edu/datasets/geolocation/.

From the abstract:

Many IP geolocation services and applications assume that all IP addresses within the same /24 IPv4 prefix (a /24 block) reside in close physical proximity. For blocks that contain addresses in very different locations (such as blocks identifying network backbones), this assumption can result in a large geolocation error. In this paper we evaluate the co-location assumption. We first develop and validate a hierarchical clustering method to find clusters of IP addresses with similar observed delay measurements within /24 blocks. We validate our methodology against two ground-truth datasets, confirming that 93% of the identified multi-cluster blocks are true positives with multiple physical locations, and that the upper bound for false positives is only about 5.4%. We then apply our methodology to a large dataset of 1.41M /24 blocks extracted from a delay-measurement study of the entire responsive IPv4 address space. We find that about 247K (17%) of the 1.41M blocks are not co-located, thus quantifying the error in the /24 block co-location assumption.
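
A rough sketch of the clustering step (the paper's actual method, features, and thresholds differ; the RTTs below are made up): cluster per-address delay observations within a /24 block and count the resulting clusters.

    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage

    # Per-address round-trip times (ms) observed within one /24 block:
    delays = np.array([[10.1], [10.4], [9.8], [85.2], [84.9]])
    tree = linkage(delays, method="average")   # agglomerative clustering
    labels = fcluster(tree, t=20.0, criterion="distance")
    print(labels)  # two clusters -> this block is likely not co-located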

The work in this paper is by Manaf Gharaibeh, Han Zhang, Christos Papadopoulos (Colorado State University) and John Heidemann (USC/ISI).


new workshop paper “BotDigger: Detecting DGA Bots in a Single Network” in TMA 2016

The paper “BotDigger: Detecting DGA Bots in a Single Network” appeared at the TMA Workshop on April 8, 2016 in Louvain-la-Neuve, Belgium (available at http://www.cs.colostate.edu/~hanzhang/papers/BotDigger-TMA16.pdf).

The code of BotDigger is available on GitHub at: https://github.com/hanzhang0116/BotDigger

From the abstract:

To improve the resiliency of communication between bots and C&C servers, bot masters began utilizing Domain Generation Algorithms (DGA) in recent years. Many systems have been introduced to detect DGA-based botnets. However, they suffer from several limitations, such as requiring DNS traffic collected across many networks, the presence of multiple bots from the same botnet, and so forth. These limitations make it very hard to detect individual bots when using traffic collected from a single network. In this paper, we introduce BotDigger, a system that detects DGA-based bots using DNS traffic without a priori knowledge of the domain generation algorithm. BotDigger utilizes a chain of evidence, including quantity, temporal, and linguistic evidence, to detect an individual bot by only monitoring traffic at the DNS servers of a single network. We evaluate BotDigger’s performance using traces from two DGA-based botnets: Kraken and Conficker. Our results show that BotDigger detects all the Kraken bots and 99.8% of the Conficker bots. A one-week DNS trace captured from our university and three traces collected from our research lab are used to evaluate false positives. The results show that the false positive rates are 0.05% and 0.39% for these two groups of background traces, respectively.
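
As one concrete example of “linguistic evidence” (illustrative only; BotDigger's actual features are described in the paper), algorithmically generated labels tend to have higher character entropy than human-chosen names:

    import math
    from collections import Counter

    def char_entropy(label):
        """Shannon entropy (bits per character) of a domain label."""
        counts = Counter(label)
        n = len(label)
        return -sum(c / n * math.log2(c / n) for c in counts.values())

    print(char_entropy("google"))       # lower: repeated, pronounceable
    print(char_entropy("bqyfgwepxmi"))  # higher: DGA-like random label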

The work in this paper is by Han Zhang, Manaf Gharaibeh, Spiros Thanasoulas, and Christos Papadopoulos (Colorado State University).


new workshop paper “AuntieTuna: Personalized Content-based Phishing Detection” in USEC 2016

The paper “AuntieTuna: Personalized Content-based Phishing Detection” will appear at the NDSS Usable Security Workshop on February 21, 2016 in San Diego, CA, USA (available at https://www.isi.edu/~calvin/papers/Ardi16a.pdf).

Implementation diagram of the AuntieTuna anti-phishing plugin.

From the abstract:

Phishing sites masquerade as copies of legitimate sites (“targets”) to fool people into sharing sensitive information that can then be used for fraud. Current phishing defenses can be ineffective, with training ignored, blacklists of discovered bad sites too slow to pick up new threats, and whitelists of known-good sites too limiting. We have developed a new technique that automatically builds personalized lists of target sites (candidates that may be copied by phish) and then tests sites as a user browses them. Our approach uses cryptographic hashing of each page’s rendered Document Object Model (DOM), providing a zero false positive rate and identifying more than half of detectable phish in a controlled study. Since each user develops a customized list of target sites, our approach presents a diverse defense against phishers. We have prototyped our approach as a Chrome browser plugin called AuntieTuna, emphasizing usability through automated and simple manual addition of target sites and clean reports of potential phish that include context about the targeted site. AuntieTuna does not slow web browsing time and presents alerts on phishing pages before users can divulge information. Our plugin is open-source and has been in use by a few users for months.
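
The core matching idea can be sketched in a few lines (a simplification: the plugin itself is JavaScript in Chrome, and details such as how the rendered DOM is chunked and the match threshold here are illustrative):

    import hashlib

    def chunk_hashes(dom_chunks):
        """Hash each chunk of a page's rendered DOM."""
        return {hashlib.sha256(c.encode()).hexdigest() for c in dom_chunks}

    def looks_like_phish(page_chunks, target_hashes, threshold=0.5):
        """Flag a page whose content heavily reuses chunks from a
        protected target site but is served from a different origin."""
        hits = chunk_hashes(page_chunks) & target_hashes
        return len(hits) / max(len(page_chunks), 1) >= threshold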

The work in this paper is by Calvin Ardi (USC/ISI) and John Heidemann (USC/ISI).


new technical report “BotDigger: Detecting DGA Bots in a Single Network”

We have released a new technical report “BotDigger: Detecting DGA Bots in a Single Network”, CS-16-101, available at http://www.cs.colostate.edu/~hanzhang/papers/BotDigger-techReport.pdf.

The code of BotDigger is available on GitHub at: https://github.com/hanzhang0116/BotDigger

From the abstract:

To improve the resiliency of communication between bots and C&C servers, bot masters began utilizing Domain Generation Algorithms (DGA) in recent years. Many systems have been introduced to detect DGA-based botnets. However, they suffer from several limitations, such as requiring DNS traffic collected across many networks, the presence of multiple bots from the same botnet, and so forth. These limitations make it very hard to detect individual bots when using traffic collected from a single network. In this paper, we introduce BotDigger, a system that detects DGA-based bots using DNS traffic without a priori knowledge of the domain generation algorithm. BotDigger utilizes a chain of evidence, including quantity, temporal, and linguistic evidence, to detect an individual bot by only monitoring traffic at the DNS servers of a single network. We evaluate BotDigger’s performance using traces from two DGA-based botnets: Kraken and Conficker. Our results show that BotDigger detects all the Kraken bots and 99.8% of the Conficker bots. A one-week DNS trace captured from our university and three traces collected from our research lab are used to evaluate false positives. The results show that the false positive rates are 0.05% and 0.39% for these two groups of background traces, respectively.

This work is by Han Zhang, Manaf Gharaibeh, Spiros Thanasoulas, and Christos Papadopoulos (Colorado State University).