Categories
Presentations

new talk “T-DNS: Connection-Oriented DNS to Improve Privacy and Security” given at DNS-OARC

John Heidemann gave the talk “T-DNS: Connection-Oriented DNS to Improve Privacy and Security” at the Spring DNS-OARC meeting in Warsaw, Poland on May 10, 2014. Slides are available at http://www.isi.edu/~johnh/PAPERS/Heidemann14c.html.

don’t fear connections for DNS

From the abstract:

This talk will discuss connection-oriented DNS to improve DNS security and privacy. DNS is the canonical example of a connectionless, single packet, request/response protocol, with UDP as its dominant transport. Yet DNS today is challenged by eavesdropping that compromises privacy, source-address spoofing that results in denial-of-service (DoS) attacks on the server and third parties, injection attacks that exploit fragmentation, and size limitations that constrain policy and operational choices. We propose t-DNS to address these problems: it uses TCP to smoothly support large payloads and mitigate spoofing and amplification for DoS. T-DNS uses transport-layer security (TLS) to provide privacy from users to their DNS resolvers and optionally to authoritative servers.

Traditional wisdom is that connection setup will balloon latency for clients and overwhelm servers. We provide data to show that these assumptions are overblown: our model of end-to-end latency shows TLS to the recursive resolver is only about 5-24% slower, with UDP to the authoritative server. End-to-end latency is 19-33% slower with TLS to recursive and TCP to authoritative. Experiments behind these models show that after connection establishment, TCP and TLS latency is equivalent to UDP. Using diverse trace data we show that frequent connection reuse is possible (60-95% for stub and recursive resolvers, although half that for authoritative servers). With conservative timeouts (20 s at authoritative servers and 60 s elsewhere) we show that server memory requirements match current hardware: a large recursive resolver may have 25k active connections consuming about 9 GB of RAM. These results depend on specific design and implementation decisions: query pipelining, out-of-order responses, TLS connection resumption, and plausible timeouts.
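A concrete way to see the transport change under discussion: per RFC 1035, DNS over TCP prefixes each message with a two-byte length field, which is what allows several queries to be pipelined back-to-back on one connection. The sketch below (illustrative only, not the T-DNS implementation) builds a minimal query and frames it for TCP:

```python
import struct

def build_dns_query(name: str, qtype: int = 1, qclass: int = 1, txid: int = 0x1234) -> bytes:
    """Build a minimal single-question DNS query in RFC 1035 wire format (qtype 1 = A)."""
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)  # RD flag set, one question
    qname = b"".join(bytes([len(label)]) + label.encode("ascii")
                     for label in name.rstrip(".").split(".")) + b"\x00"
    return header + qname + struct.pack(">HH", qtype, qclass)

def frame_for_tcp(msg: bytes) -> bytes:
    """DNS over TCP prefixes each message with its length as a 16-bit big-endian integer."""
    return struct.pack(">H", len(msg)) + msg

query = build_dns_query("example.com")
tcp_payload = frame_for_tcp(query)
```

Over UDP the bare message is sent as a single datagram; over TCP, several framed messages can be written on one connection without waiting for responses, which is the pipelining the abstract refers to.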

We hope to solicit feedback from the OARC community about this work to understand design and operational concerns if T-DNS deployment were widespread. The work in the talk is by Liang Zhu, Zi Hu, and John Heidemann (all of USC/ISI), Duane Wessels and Allison Mankin (both of Verisign), and Nikita Somaiya (USC/ISI).

A technical report describing the work is at http://www.isi.edu/~johnh/PAPERS/Zhu14a.pdf and the protocol changes are described at http://datatracker.ietf.org/doc/draft-hzhwm-start-tls-for-dns/.

Categories
Publications Technical Report

new technical report “T-DNS: Connection-Oriented DNS to Improve Privacy and Security”

We released a new technical report “T-DNS: Connection-Oriented DNS to Improve Privacy and Security”, ISI-TR-2014-688, available as http://www.isi.edu/~johnh/PAPERS/Zhu14a.pdf

From the abstract:

This paper explores connection-oriented DNS to improve DNS security and privacy. DNS is the canonical example of a connectionless, single packet, request/response protocol, with UDP as its dominant transport. Yet DNS today is challenged by eavesdropping that compromises privacy, source-address spoofing that results in denial-of-service (DoS) attacks on the server and third parties, injection attacks that exploit fragmentation, and size limitations that constrain policy and operational choices. We propose t-DNS to address these problems: it combines TCP to smoothly support large payloads and mitigate spoofing and amplification for DoS. T-DNS uses transport-layer security (TLS) to provide privacy from users to their DNS resolvers and optionally to authoritative servers. Traditional wisdom is that connection setup will balloon latency for clients and overwhelm servers. These are myths—our model of end-to-end latency shows TLS to the recursive resolver is only about 21% slower, with UDP to the authoritative server. End-to-end latency is 90% slower with TLS to recursive and TCP to authoritative. Experiments behind these models show that after connection establishment, TCP and TLS latency is equivalent to UDP. Using diverse trace data we show that frequent connection reuse is possible (60–95% for stub and recursive resolvers, although half that for authoritative servers). With conservative timeouts (20 s at authoritative servers and 60 s elsewhere) we show that server memory requirements match current hardware: a large recursive resolver may have 25k active connections consuming about 9 GB of RAM. We identify the key design and implementation decisions needed to minimize overhead—query pipelining, out-of-order responses, TLS connection resumption, and plausible timeouts.
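As a quick sanity check on the stated memory figures, 25k active connections in about 9 GB of RAM works out to roughly 360 KB of state per connection (assuming decimal gigabytes):

```python
connections = 25_000
ram_bytes = 9e9  # 9 GB, decimal

per_connection = ram_bytes / connections
print(f"{per_connection / 1e3:.0f} KB per connection")  # → 360 KB
```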

Categories
Papers Publications

new conference paper “Replay of Malicious Traffic in Network Testbeds” in IEEE Conf. on Technologies for Homeland Security (HST)

The paper “Replay of Malicious Traffic in Network Testbeds” (by Alefiya Hussain, Yuri Pradkin, and John Heidemann) will appear at the IEEE Conference on Technologies for Homeland Security (HST) in Waltham, Mass. in Nov. 2013. The paper is available at http://www.isi.edu/~johnh/PAPERS/Hussain13a.

From the paper’s abstract:

In this paper we present tools and methods to integrate attack measurements from the Internet with controlled experimentation on a network testbed. We show that this approach provides greater fidelity than synthetic models. We compare the statistical properties of real-world attacks with synthetically generated constant bit rate attacks on the testbed. Our results indicate that trace replay provides fine time-scale details that may be absent in constant bit rate attacks. Additionally, we demonstrate the effectiveness of our approach to study new and emerging attacks. We replay an Internet attack captured by the LANDER system on the DETERLab testbed within two hours.
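One simple statistic that separates a replayed trace from a constant-bit-rate synthetic is the variability of packet inter-arrival times. The sketch below (timestamps are hypothetical, not from the paper’s dataset) uses the coefficient of variation, which is near zero for a perfectly constant rate and large for bursty traffic:

```python
from statistics import mean, pstdev

def interarrival_cv(timestamps):
    """Coefficient of variation of inter-arrival times (0 for a perfectly constant rate)."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return pstdev(gaps) / mean(gaps)

cbr = [i * 0.01 for i in range(100)]            # constant bit rate: one packet every 10 ms
bursty = [0.0, 0.001, 0.002, 0.5, 0.501, 1.0]   # bursts separated by idle gaps (hypothetical)

print(interarrival_cv(cbr))     # ≈ 0
print(interarrival_cv(bursty))  # > 1
```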

Data from the paper is available as DoS_DNS_amplification-20130617 from the authors or http://www.predict.org, and the tools are available at DETERLab.

Categories
Papers Publications

New conference paper “Evaluating Anycast in the Domain Name System” to appear at INFOCOM

The paper “Evaluating Anycast in the Domain Name System” (available at http://www.isi.edu/~xunfan/research/Fan13a.pdf) was accepted to appear at the IEEE International Conference on Computer Communications (INFOCOM) 2013 in Turin, Italy.

Recall as the number of vantage points varies. [Fan13a, figure 2]
From the abstract:

IP anycast is a central part of production DNS. While prior work has explored proximity, affinity and load balancing for some anycast services, there has been little attention to third-party discovery and enumeration of components of an anycast service. Enumeration can reveal abnormal service configurations, benign masquerading or hostile hijacking of anycast services, and help characterize anycast deployment. In this paper, we discuss two methods to identify and characterize anycast nodes. The first uses an existing anycast diagnosis method based on CHAOS-class DNS records but augments it with traceroute to resolve ambiguities. The second proposes Internet-class DNS records which permit accurate discovery through the use of existing recursive DNS infrastructure. We validate these two methods against three widely-used anycast DNS services, using a very large number (60k and 300k) of vantage points, and show that they can provide excellent precision and recall. Finally, we use these methods to evaluate anycast deployments in top-level domains (TLDs), and find one case where a third-party operates a server masquerading as a root DNS anycast node as well as a noticeable proportion of unusual DNS proxies. We also show that, across all TLDs, up to 72% use anycast.
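The CHAOS-class technique the abstract mentions asks a server to identify itself via special TXT records such as hostname.bind, queried in DNS class CH rather than the usual IN. A minimal sketch of that query’s wire format (illustrative only, no network I/O):

```python
import struct

def chaos_query(name: str = "hostname.bind", txid: int = 0x42) -> bytes:
    """Minimal DNS query for a TXT record in the CHAOS class (RFC 1035 wire format)."""
    QTYPE_TXT, QCLASS_CH = 16, 3
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)  # RD flag set, one question
    qname = b"".join(bytes([len(l)]) + l.encode("ascii") for l in name.split(".")) + b"\x00"
    return header + qname + struct.pack(">HH", QTYPE_TXT, QCLASS_CH)
```

Sent from many vantage points toward an anycast address, each answer names whichever node the routing system selected; the paper augments this with traceroute because the CHAOS answers alone can be ambiguous.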

Citation: Xun Fan, John Heidemann and Ramesh Govindan. Evaluating Anycast in the Domain Name System. To appear in Proceedings of the IEEE International Conference on Computer Communications (INFOCOM). Turin, Italy. April, 2013. http://www.isi.edu/~johnh/PAPERS/Fan13a.html

Categories
Papers Publications

New conference paper “Detecting Encrypted Botnet Traffic” at Global Internet 2013

The paper “Detecting Encrypted Botnet Traffic” (available at http://www.netsec.colostate.edu/~zhang/DetectingEncryptedBotnetTraffic.pdf) was accepted by Global Internet 2013 in Turin, Italy.

From the abstract:

Bot detection methods that rely on deep packet inspection (DPI) can be foiled by encryption. Encryption, however, increases entropy. This paper investigates whether adding high-entropy detectors to an existing bot detection tool that uses DPI can restore some of the bot visibility. We present two high-entropy classifiers, and use one of them to enhance BotHunter. Our results show that while BotHunter misses about 50% of the bots when they employ encryption, our high-entropy classifier restores most of its ability to detect bots, even when they use encryption.
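The high-entropy idea can be sketched as a Shannon-entropy threshold over payload bytes: encrypted or compressed data approaches 8 bits per byte, while plaintext protocols sit well below. This is a generic sketch, not the paper’s classifier, and the 7.0 cutoff is a hypothetical value:

```python
import math
import os
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte (maximum 8.0)."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def looks_encrypted(payload: bytes, threshold: float = 7.0) -> bool:
    # Hypothetical cutoff: encrypted or compressed payloads are near 8 bits/byte,
    # plaintext protocols are typically far lower.
    return byte_entropy(payload) > threshold

print(looks_encrypted(b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n" * 10))  # False
print(looks_encrypted(os.urandom(4096)))  # True (with overwhelming probability)
```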

This work is advised by Christos Papadopoulos and Dan Massey at Colorado State University.

Categories
Publications Technical Report

New tech report “Characterizing Anycast in the Domain Name System”

We just published a new technical report on our anycast enumeration work, including some exciting new results. Check out “Characterizing Anycast in the Domain Name System” (available at ftp://ftp.isi.edu/isi-pubs/tr-681.pdf).

From the abstract:

IP anycast is a central part of production DNS. While prior work has explored proximity, affinity and load balancing for some anycast services, there has been little attention to third-party discovery and enumeration of components of an anycast service. Enumeration can reveal abnormal service configurations, benign masquerading or hostile hijacking of anycast services, and can help characterize the extent of anycast deployment. In this paper, we discuss two methods to identify and characterize anycast nodes. The first uses an existing anycast diagnosis method based on CHAOS-class DNS records but augments it with traceroute to resolve ambiguities. The second proposes Internet-class DNS records which permit accurate discovery through the use of existing recursive DNS infrastructure. We validate these two methods against three widely-used anycast DNS services, using a very large number (60k and 300k) of vantage points, and show that they can provide excellent precision and recall. Finally, we use these methods to evaluate anycast deployments in top-level domains (TLDs), and find one case where a third-party operates a server masquerading as a root DNS anycast node as well as a noticeable proportion of unusual anycast proxies. We also show that, across all TLDs, up to 72% use anycast, and that, of about 30 anycast providers, the two largest serve nearly half the anycasted TLD nameservers.

Citation: Xun Fan, John Heidemann and Ramesh Govindan. Characterizing Anycast in the Domain Name System. Technical Report ISI-TR-681, USC/Information Sciences Institute, May, 2012. ftp://ftp.isi.edu/isi-pubs/tr-681.pdf

Categories
Publications Technical Report

New tech report “Identifying and Characterizing Anycast in the Domain Name System”

We just published a new technical report “Identifying and Characterizing Anycast in the Domain Name System” (available at ftp://ftp.isi.edu/isi-pubs/tr-671.pdf).

From the abstract:

Since its first appearance, IP anycast has become essential for critical network services such as the Domain Name System (DNS). Despite this, there has been little attention to independently identifying and characterizing anycast nodes. External evaluation of anycast allows third-party auditing of its benefits, and is essential to discovering benign masquerading or hostile hijacking of anycast services. In this paper, we develop ACE, an approach to identify and characterize anycast nodes. ACE’s first method uses DNS queries for CHAOS records, the recommended debugging service for anycast, suitable for cooperative anycast services. Its second method uses traceroute to identify all anycast services by their connectivity to the Internet. Each individual method has ambiguities in some circumstances; we show a combined method improves on both. We validate ACE against two widely used anycast DNS services that provide ground truth. ACE has good precision, with 88% of its results corresponding to unique anycast nodes of the F-root DNS service. Its recall is affected by the number and diversity of vantage points. We use ACE for an initial study of how anycast is used for top-level domain servers. We find one case where a third-party server operates on a root-DNS IP address, masquerading to capture traffic for its organization. We also study the 1164 nameserver IP addresses used by all generic and country-code top-level domains in April 2011. This study shows evidence that at least 14% and perhaps 32% use anycast.

Citation: Xun Fan, John Heidemann and Ramesh Govindan. Identifying and Characterizing Anycast in the Domain Name System. Technical Report ISI-TR-671, USC/Information Sciences Institute, June, 2011. ftp://ftp.isi.edu/isi-pubs/tr-671.pdf

Data from this paper will be available from PREDICT through the LANDER project; contact the authors for details.

Categories
Papers Publications

new conference paper “Low-Rate, Flow-Level Periodicity Detection” at Global Internet 2011

Visualization of low-rate periodicity, before and after installation of a keylogger. [Bartlett11a, figure 3]
The paper “Low-Rate, Flow-Level Periodicity Detection”, by Genevieve Bartlett, John Heidemann, and Christos Papadopoulos is being presented at IEEE Global Internet 2011 in Shanghai, China this week. The full text is available at http://www.isi.edu/~johnh/PAPERS/Bartlett11a.pdf.

The abstract summarizes the work:

As desktops and servers become more complicated, they employ an increasing amount of automatic, non-user initiated communication. Such communication can be good (OS updates, RSS feed readers, and mail polling), bad (keyloggers, spyware, and botnet command-and-control), or ugly (adware or unauthorized peer-to-peer applications). Communication in these applications is often regular, but with very long periods, ranging from minutes to hours. This infrequent communication and the complexity of today’s systems make these applications difficult for users to detect and diagnose. In this paper we present a new approach to identify low-rate periodic network traffic and changes in such regular communication. We employ signal-processing techniques, using discrete wavelets implemented as a fully decomposed, iterated filter bank. This approach not only detects low-rate periodicities, but also identifies approximate times when traffic changed. We implement a self-surveillance application that externally identifies changes to a user’s machine, such as interruption of periodic software updates, or an installation of a keylogger.
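The wavelet machinery can be illustrated in its simplest form, an iterated Haar filter bank: repeatedly split the signal into pairwise averages (approximation) and differences (detail), then look for the detail level carrying the most energy, which corresponds to variation at a time scale of 2^level samples. This is a toy sketch, not the paper’s fully decomposed filter bank, and the on/off traffic signal below is synthetic:

```python
import math

def haar_level_energies(signal):
    """Iterated Haar filter bank: mean detail energy per level (level j ~ scale 2**j samples)."""
    energies = []
    approx = list(signal)
    while len(approx) > 1:
        pairs = list(zip(approx[0::2], approx[1::2]))
        detail = [(a - b) / math.sqrt(2) for a, b in pairs]   # high-pass: differences
        approx = [(a + b) / math.sqrt(2) for a, b in pairs]   # low-pass: averages
        energies.append(sum(d * d for d in detail) / len(detail))
    return energies

# Synthetic traffic with an on/off period of 16 samples: energy peaks at level 4 (2**4 == 16).
square = ([1.0] * 8 + [0.0] * 8) * 16   # 256 samples
energies = haar_level_energies(square)
print(energies.index(max(energies)) + 1)  # → 4
```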

The datasets used in this paper are available on request, and through PREDICT.

An expanded version of the paper is available as a technical report “Using low-rate flow periodicities in anomaly detection” by Bartlett, Heidemann and Papadopoulos. Technical Report ISI-TR-661, USC/Information Sciences Institute, Jul 2009. http://www.isi.edu/~johnh/PAPERS/Bartlett09a.pdf

Categories
Papers Publications

New journal paper “Parametric Methods for Anomaly Detection in Aggregate Traffic” to appear in TON

The paper “Parametric Methods for Anomaly Detection in Aggregate Traffic” was accepted for publication in ACM/IEEE Transactions on Networking (available at http://www.isi.edu/~johnh/PAPERS/Thatte10a.html).

From the abstract:

This paper develops parametric methods to detect network anomalies using only aggregate traffic statistics, in contrast to other works requiring flow separation, even when the anomaly is a small fraction of the total traffic. By adopting simple statistical models for anomalous and background traffic in the time-domain, one can estimate model parameters in realtime, thus obviating the need for a long training phase or manual parameter tuning. The proposed bivariate Parametric Detection Mechanism (bPDM) uses a sequential probability ratio test, allowing for control over the false positive rate while examining the trade-off between detection time and the strength of an anomaly. Additionally, it uses both traffic-rate and packet-size statistics, yielding a bivariate model that eliminates most false positives. The method is analyzed using the bitrate SNR metric, which is shown to be an effective metric for anomaly detection. The performance of the bPDM is evaluated in three ways: first, synthetically-generated traffic provides for a controlled comparison of detection time as a function of the anomalous level of traffic. Second, the approach is shown to be able to detect controlled artificial attacks over the USC campus network in varying real traffic mixes. Third, the proposed algorithm achieves rapid detection of real denial-of-service attacks as determined by the replay of previously captured network traces. The method developed in this paper is able to detect all attacks in these scenarios in a few seconds or less.
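The sequential probability ratio test at the core of the bPDM can be sketched in its textbook form, Wald’s SPRT for a Gaussian mean shift with known variance. All parameter values below are hypothetical, and the real bPDM is bivariate, also folding in packet-size statistics:

```python
import math

def sprt(samples, mu0, mu1, sigma, alpha=0.01, beta=0.01):
    """Wald's SPRT for a Gaussian mean shift; returns ('H0'|'H1'|'undecided', samples used)."""
    upper = math.log((1 - beta) / alpha)   # accept H1 (anomaly) at or above this
    lower = math.log(beta / (1 - alpha))   # accept H0 (background) at or below this
    llr = 0.0
    for n, x in enumerate(samples, 1):
        # log-likelihood ratio increment for N(mu1, sigma^2) vs N(mu0, sigma^2)
        llr += (mu1 - mu0) * (x - (mu0 + mu1) / 2) / sigma**2
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", len(samples)

# Hypothetical traffic rates (Mb/s): background mean 10, an attack raises it to 12.
decision, n = sprt([12.0] * 50, mu0=10.0, mu1=12.0, sigma=2.0)
print(decision, n)  # H1 after a handful of samples
```

The alpha/beta thresholds are what give the control over the false-positive rate mentioned in the abstract; a stronger anomaly yields larger per-sample increments and thus faster detection.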

Citation: Gautam Thatte, Urbashi Mitra, and John Heidemann. Parametric Methods for Anomaly Detection in Aggregate Traffic. ACM/IEEE Transactions on Networking, accepted to appear, August, 2010. (Likely publication in 2011). <http://www.isi.edu/~johnh/PAPERS/Thatte10a.html>.
Categories
Papers Publications

New conference paper “Correlating Spam Activity with IP Address Characteristics” at Global Internet

The paper “Correlating Spam Activity with IP Address Characteristics” (available in PDF format) was accepted and presented at Global Internet 2010. The focus of this paper is to quantify the collateral damage (legitimate mail servers incorrectly blacklisted) caused by the practice of blocking /24 address blocks based on the presence of spammers. The paper also revisits the differences in IP address characteristics and domain names between spammers and non-spammers.

From the abstract:

It is well known that spam bots mostly utilize compromised machines with certain address characteristics, such as dynamically allocated addresses, machines in specific geographic areas and IP ranges from ASes with more tolerant spam policies. Such machines tend to be less diligently administered and may exhibit less stability, more volatility, and shorter uptimes. However, few studies have attempted to quantify how such spam bot address characteristics compare with non-spamming hosts. Quantifying these characteristics may help provide important information for comprehensive spam mitigation.

We use two large datasets, namely a commercial blacklist and an Internet-wide address visibility study to quantify address characteristics of spam and non-spam networks. We find that spam networks exhibit significantly less availability and uptime, and higher volatility than non-spam networks. In addition, we conduct a collateral damage study of a common practice where an ISP blocks the entire /24 prefix if spammers are detected in that range.  We find that such a policy blacklists a significant portion of legitimate mail servers belonging to the same prefix.
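The blocking policy studied can be expressed directly in code: blacklist every /24 that contains a spammer, then count how many legitimate mail servers fall inside a blacklisted prefix. All addresses below are hypothetical examples from documentation/benchmark ranges, not the paper’s data:

```python
from ipaddress import ip_network

def slash24(ip: str):
    """The /24 prefix containing an address."""
    return ip_network(f"{ip}/24", strict=False)

def collateral_damage(spammers, legit_servers):
    """Fraction of legitimate servers blacklisted when every spammer's /24 is blocked."""
    blocked = {slash24(ip) for ip in spammers}
    hit = [ip for ip in legit_servers if slash24(ip) in blocked]
    return len(hit) / len(legit_servers)

# Hypothetical addresses: one legitimate server shares a /24 with a spammer.
spammers = ["192.0.2.17", "198.51.100.5"]
legit = ["192.0.2.200", "203.0.113.10", "203.0.113.11", "198.18.0.1"]
print(collateral_damage(spammers, legit))  # → 0.25
```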

Citation: Chris Wilcox, Christos Papadopoulos, John Heidemann. Correlating Spam Activity with IP Address Characteristics.  Proceedings of the IEEE Global Internet Conference, San Diego, CA, USA, IEEE.  March, 2010.