Categories
Presentations

new talk “Distributed Denial-of-Service: What Datasets Can Help?” at ACSAC 2016

John Heidemann gave the talk “Distributed Denial-of-Service: What Datasets Can Help?” at ACSAC 2016 in Universal City, California, USA on December 7, 2016.  Slides are available at http://www.isi.edu/~johnh/PAPERS/Heidemann16d.pdf.

From the abstract:

Distributed Denial-of-Service attacks are a continuing threat to the Internet. Meeting this threat requires new approaches that will emerge from new research, but new research requires the support of datasets and experimental methods. This talk describes four different aspects of research on DDoS, privacy and security, and the datasets that we have generated to support that research. Areas we consider are detecting low-rate DDoS attacks, understanding the effects of DDoS on DNS infrastructure, evolving the DNS protocol to prevent DDoS and improve privacy, and ideas about experimental testbeds to evaluate new ideas in DDoS defense for DNS. Datasets described in this talk are available at no cost from the author and through the IMPACT Program.

This talk is based on work with many prior collaborators: Terry Benzel, Wes Hardaker, Cristian Hesselman, Zi Hu, Allison Mankin, Urbashi Mitra, Giovane Moura, Moritz Müller, Ricardo de O. Schmidt, Nikita Somaiya, Gautam Thatte, Wouter de Vries, Lan Wei, Duane Wessels, and Liang Zhu.

Datasets from the paper are available at https://ant.isi.edu/datasets/ and at https://impactcybertrust.org.

Categories
Papers Publications

new conference paper “Anycast vs. DDoS: Evaluating the November 2015 Root DNS Event” in IMC 2016

The paper “Anycast vs. DDoS: Evaluating the November 2015 Root DNS Event” will appear at the ACM Internet Measurement Conference in November 2016 in Santa Monica, California, USA (available at http://www.isi.edu/~weilan/PAPER/IMC2016camera.pdf).

From the abstract:

RIPE Atlas VPs going to different anycast sites when under stress. Colors indicate different sites, with black showing unsuccessful queries. [Moura16b, figure 11b]

Distributed Denial-of-Service (DDoS) attacks continue to be a major threat in the Internet today. DDoS attacks overwhelm target services with requests or other traffic, causing requests from legitimate users to be shut out. A common defense against DDoS is to replicate the service in multiple physical locations or sites. If all sites announce a common IP address, BGP will associate users around the Internet with a nearby site, defining the catchment of that site. Anycast addresses DDoS both by increasing capacity to the aggregate of many sites, and by allowing each catchment to contain attack traffic, leaving other sites unaffected. IP anycast is widely used for commercial CDNs and essential infrastructure such as DNS, but there is little evaluation of anycast under stress. This paper provides the first evaluation of several anycast services under stress with public data. Our subject is the Internet’s Root Domain Name Service, made up of 13 independently designed services (“letters”, 11 with IP anycast) running at more than 500 sites. Many of these services were stressed by sustained traffic at 100 times normal load on Nov. 30 and Dec. 1, 2015. We use public data for most of our analysis to examine how different services respond to these events. We see how different anycast deployments respond to stress, and identify two policies: sites may absorb attack traffic, containing the damage but reducing service to some users, or they may withdraw routes to shift both good and bad traffic to other sites. We study how these deployment policies result in different levels of service to different users. We also show evidence of collateral damage on other services located near the attacks.
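
The two policies named in the abstract can be made concrete with a toy model. The sketch below is purely illustrative and is not the paper’s methodology (which uses real BGP routing and RIPE Atlas measurements); the site names, capacities, and even catchment split are all assumptions:

# Toy model of anycast sites under DDoS: "absorb" sites keep announcing
# the prefix and ride out the overload; "withdraw" sites pull their route,
# shifting their catchment (good and bad traffic alike) onto the survivors.
SITES = {
    "site-A": {"capacity": 150_000, "policy": "absorb"},
    "site-B": {"capacity": 50_000, "policy": "withdraw"},
    "site-C": {"capacity": 80_000, "policy": "absorb"},
}

def apply_attack(sites, attack_load):
    per_site = attack_load / len(sites)  # assume even catchments
    announcing = {name for name, cfg in sites.items()
                  if not (cfg["policy"] == "withdraw"
                          and per_site > cfg["capacity"])}
    shifted = attack_load / max(len(announcing), 1)
    return {name: {"load": shifted,
                   "overloaded": shifted > sites[name]["capacity"]}
            for name in announcing}

# 240k q/s spread over three sites: site-B withdraws, so its share lands
# on site-A (which has headroom) and site-C (which is now overloaded),
# giving different levels of service to different users.
print(apply_attack(SITES, 240_000))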

This IMC paper is joint work of Giovane C. M. Moura, Moritz Müller, Cristian Hesselman (SIDN Labs), Ricardo de O. Schmidt, Wouter B. de Vries (U. Twente), John Heidemann, and Lan Wei (USC/ISI). Datasets in this paper are derived from RIPE Atlas and are available at http://traces.simpleweb.org/ and at https://ant.isi.edu/datasets/anycast/.

Categories
Software releases

new software dnsanon_rssac

We have released version 1.3 of dnsanon_rssac (2016-06-13), a tool that processes DNS data seen in packet captures (typically in pcap format) to generate RSSAC-002 statistics reports.

Our tool is at https://ant.isi.edu/software/dnsanon_rssac/index.html, with a description at https://ant.isi.edu/software/dnsanon_rssac/README.html. Our tool builds on dnsanon.

The main goal of our implementation is that partial processing can be done independently and then merged. Merging works both for files captured at different times of the day and for files captured at different anycast sites.
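
A minimal sketch of this merge idea (illustrative only: the function names and per-query fields below are assumptions, not dnsanon_rssac’s actual code or file format):

from collections import Counter

def process_chunk(queries):
    """Count statistics for one capture chunk; each chunk (a time slice
    or one anycast site's capture) is processed independently."""
    stats = Counter()
    for q in queries:
        stats["total-queries"] += 1
        stats["qtype-" + q["qtype"]] += 1
    return stats

def merge(partials):
    """Merging is counter addition: associative and order-independent,
    so chunks from different times of day or different anycast sites
    combine the same way."""
    total = Counter()
    for p in partials:
        total += p
    return total

site_a = process_chunk([{"qtype": "A"}, {"qtype": "AAAA"}])
site_b = process_chunk([{"qtype": "A"}])
print(merge([site_a, site_b]))
# Counter({'total-queries': 3, 'qtype-A': 2, 'qtype-AAAA': 1})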

Our software stack has run at B-Root since February 2016, and has been in production use since May 2016.

To our knowledge, this tool is the first to implement the RSSAC-002v3 specification.

Categories
Papers Publications

new workshop paper “BotDigger: Detecting DGA Bots in a Single Network” in TMA 2016

The paper “BotDigger: Detecting DGA Bots in a Single Network” appeared at the TMA Workshop on April 8, 2016 in Louvain-la-Neuve, Belgium (available at http://www.cs.colostate.edu/~hanzhang/papers/BotDigger-TMA16.pdf).

The code of BotDigger is available on GitHub at: https://github.com/hanzhang0116/BotDigger

From the abstract:

To improve the resiliency of communication between bots and C&C servers, bot masters began utilizing Domain Generation Algorithms (DGA) in recent years. Many systems have been introduced to detect DGA-based botnets. However, they suffer from several limitations, such as requiring DNS traffic collected across many networks, the presence of multiple bots from the same botnet, and so forth. These limitations make it very hard to detect individual bots when using traffic collected from a single network. In this paper, we introduce BotDigger, a system that detects DGA-based bots using DNS traffic without a priori knowledge of the domain generation algorithm. BotDigger utilizes a chain of evidence, including quantity, temporal and linguistic evidence to detect an individual bot by only monitoring traffic at the DNS servers of a single network. We evaluate BotDigger’s performance using traces from two DGA-based botnets: Kraken and Conficker. Our results show that BotDigger detects all the Kraken bots and 99.8% of Conficker bots. A one-week DNS trace captured from our university and three traces collected from our research lab are used to evaluate false positives. The results show that the false positive rates are 0.05% and 0.39% for these two groups of background traces, respectively.
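
As one concrete illustration of the “linguistic evidence” the abstract mentions: algorithmically generated labels tend to have higher character entropy than human-chosen names. The toy sketch below shows that single signal with a hypothetical length cutoff and threshold; it is not BotDigger’s actual feature set (see the paper and the GitHub code for the real system):

import math
from collections import Counter

def label_entropy(label):
    """Shannon entropy (bits per character) of a DNS label."""
    counts = Counter(label)
    n = len(label)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def looks_generated(domain, threshold=3.5):
    # Hypothetical cutoff and threshold, for illustration only.
    label = domain.split(".")[0]
    return len(label) >= 10 and label_entropy(label) > threshold

print(looks_generated("google.com"))          # False: short, low entropy
print(looks_generated("xj4kqz7w1bvp0s.net"))  # True: long, near-random label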

The work in this paper is by Han Zhang, Manaf Gharaibeh, Spiros Thanasoulas, and Christos Papadopoulos (Colorado State University).

Categories
Presentations

new talk “New Opportunities for Research and Experiments in Internet Naming And Identification” at the AIMS Workshop

John Heidemann gave the talk “New Opportunities for Research and Experiments in Internet Naming And Identification” at the AIMS 2016 workshop at CAIDA, La Jolla, California on February 11, 2016.  Slides are available at http://www.isi.edu/~johnh/PAPERS/Heidemann16a.pdf.

Needs for new naming and identity research prompt new research infrastructure, enabling new research directions.

From the abstract:

DNS is central to Internet use today, yet research on DNS is challenging: many researchers find it difficult to create realistic experiments at scale that are representative of the large installed base, and datasets are often short (two days or less) or otherwise limited. Yet DNS evolution presses on: improvements to privacy are needed, and extensions like DANE provide an opportunity for DNS to improve security and support identity management. We are exploring how to grow the research community and enable meaningful work on Internet naming. In this talk we propose new research infrastructure to support realistic DNS experiments and longitudinal data studies. We are looking for feedback on our proposed approaches and input about your pressing research problems in Internet naming and identification.

For more information see our project website.

Categories
Publications Technical Report

new technical report “BotDigger: Detecting DGA Bots in a Single Network”

We have released a new technical report “BotDigger: Detecting DGA Bots in a Single Network”, CS-16-101, available at http://www.cs.colostate.edu/~hanzhang/papers/BotDigger-techReport.pdf.

The code of BotDigger is available on GitHub at: https://github.com/hanzhang0116/BotDigger

From the abstract:

To improve the resiliency of communication between bots and C&C servers, bot masters began utilizing Domain Generation Algorithms (DGA) in recent years. Many systems have been introduced to detect DGA-based botnets. However, they suffer from several limitations, such as requiring DNS traffic collected across many networks, the presence of multiple bots from the same botnet, and so forth. These limitations make it very hard to detect individual bots when using traffic collected from a single network. In this paper, we introduce BotDigger, a system that detects DGA-based bots using DNS traffic without a priori knowledge of the domain generation algorithm. BotDigger utilizes a chain of evidence, including quantity, temporal and linguistic evidence to detect an individual bot by only monitoring traffic at the DNS servers of a single network. We evaluate BotDigger’s performance using traces from two DGA-based botnets: Kraken and Conficker. Our results show that BotDigger detects all the Kraken bots and 99.8% of Conficker bots. A one-week DNS trace captured from our university and three traces collected from our research lab are used to evaluate false positives. The results show that the false positive rates are 0.05% and 0.39% for these two groups of background traces, respectively.

This work is by Han Zhang, Manaf Gharaibeh, Spiros Thanasoulas and Christos Papadopoulos (Colorado State University).

Categories
Papers Publications

new conference paper “Measuring the Latency and Pervasiveness of TLS Certificate Revocation” in PAM 2016

The paper “Measuring the Latency and Pervasiveness of TLS Certificate Revocation” will appear at the Passive and Active Measurement Conference in March 2016 in Heraklion, Crete, Greece (available at http://www.isi.edu/~liangzhu/papers/Zhu16a.pdf).

From the abstract:

Today, Transport-Layer Security (TLS) is the bedrock of Internet security for the web and web-derived applications. TLS depends on the X.509 Public Key Infrastructure (PKI) to authenticate endpoint identity. An essential part of a PKI is the ability to quickly revoke certificates, for example, after a key compromise. Today the Online Certificate Status Protocol (OCSP) is the most common way to quickly distribute revocation information. However, prior and current concerns about OCSP latency and privacy raise questions about its use. We examine OCSP using passive network monitoring of live traffic at the Internet uplink of a large research university and verify the results using active scans. Our measurements show that the median latency of OCSP queries is quite good: only 20 ms today, much less than the 291 ms observed in 2012. This improvement is because content delivery networks (CDNs) serve most OCSP traffic today; our measurements show 94% of queries are served by CDNs. We also show that OCSP use is ubiquitous today: it is used by all popular web browsers, as well as important non-web applications such as MS-Windows code signing.
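
For readers who want to try a single active probe of this kind, the sketch below builds and times one OCSP query using the Python cryptography and requests libraries. This is an assumed reconstruction for illustration, not the paper’s measurement tooling; the certificate file names are placeholders:

import time
import requests                                # pip install requests cryptography
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.x509 import ocsp

# Placeholder inputs: the server certificate and its issuer, as PEM files.
cert = x509.load_pem_x509_certificate(open("server.pem", "rb").read())
issuer = x509.load_pem_x509_certificate(open("issuer.pem", "rb").read())

# Build a DER-encoded OCSP request for this certificate.
req = (ocsp.OCSPRequestBuilder()
       .add_certificate(cert, issuer, hashes.SHA1())
       .build())
der = req.public_bytes(serialization.Encoding.DER)

# The responder URL comes from the certificate's AIA extension.
aia = cert.extensions.get_extension_for_class(x509.AuthorityInformationAccess)
url = next(d.access_location.value for d in aia.value
           if d.access_method == x509.oid.AuthorityInformationAccessOID.OCSP)

# Time one query; the paper reports medians over many such observations.
start = time.monotonic()
resp = requests.post(url, data=der,
                     headers={"Content-Type": "application/ocsp-request"})
latency_ms = (time.monotonic() - start) * 1000

# Assumes a successful responder reply; a real probe should check
# response_status before reading certificate_status.
status = ocsp.load_der_ocsp_response(resp.content).certificate_status
print("%s: %s in %.1f ms" % (url, status, latency_ms))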

The work in the paper is by Liang Zhu (USC/ISI), Johanna Amann (ICSI) and John Heidemann (USC/ISI). The active probe dataset in this paper is available upon request.

Categories
Papers Publications

new workshop paper “Privacy Principles for Sharing Cyber Security Data” in IWPE 15

The paper “Privacy Principles for Sharing Cyber Security Data” (available at https://www.isi.edu/~calvin/papers/Fisk15a.pdf) will appear at the International Workshop on Privacy Engineering (co-located with the IEEE Symposium on Security and Privacy) on May 21, 2015 in San Jose, California.

From the abstract:

Sharing cyber security data across organizational boundaries brings both privacy risks in the exposure of personal information and data, and organizational risk in disclosing internal information. These risks occur as information leaks in network traffic or logs, and also in queries made across organizations. They are also complicated by the trade-offs in privacy preservation and utility present in anonymization to manage disclosure. In this paper, we define three principles that guide sharing security information across organizations: Least Disclosure, Qualitative Evaluation, and Forward Progress. We then discuss engineering approaches that apply these principles to a distributed security system. Application of these principles can reduce the risk of data exposure and help manage trust requirements for data sharing, helping to meet our goal of balancing privacy, organizational risk, and the ability to better respond to security with shared information.
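
As one concrete example of Least Disclosure, a common engineering approach is to share a keyed hash of an indicator rather than the raw value, so partners can match records without reading them. The sketch below illustrates only the principle; it is not the system described in the paper, and the key handling is hypothetical:

import hashlib
import hmac

SITE_KEY = b"per-exchange secret"   # hypothetical key from the sharing agreement

def disclose(ip):
    """Keyed hash of an address: matchable by parties holding the key,
    opaque to everyone else."""
    return hmac.new(SITE_KEY, ip.encode(), hashlib.sha256).hexdigest()

# Both organizations hash their records before exchange; equal digests
# mean the same address appeared at both sites, and nothing more leaks.
print(disclose("192.0.2.17") == disclose("192.0.2.17"))   # True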

The work in the paper is by Gina Fisk (LANL), Calvin Ardi (USC/ISI), Neale Pickett (LANL), John Heidemann (USC/ISI), Mike Fisk (LANL), and Christos Papadopoulos (Colorado State). This work is supported by DHS S&T, Cyber Security division.

Categories
Papers Publications

new conference paper “Replay of Malicious Traffic in Network Testbeds” in IEEE Conf. on Technologies for Homeland Security (HST)

The paper “Replay of Malicious Traffic in Network Testbeds” (by Alefiya Hussain, Yuri Pradkin, and John Heidemann) will appear in the 13th IEEE Conference on Technologies for Homeland Security (HST) in Waltham, Mass. in Nov. 2013. The paper is available at http://www.isi.edu/~johnh/PAPERS/Hussain13a.

From the paper’s abstract:

In this paper we present tools and methods to integrate attack measurements from the Internet with controlled experimentation on a network testbed. We show that this approach provides greater fidelity than synthetic models. We compare the statistical properties of real-world attacks with synthetically generated constant bit rate attacks on the testbed. Our results indicate that trace replay provides fine time-scale details that may be absent in constant bit rate attacks. Additionally, we demonstrate the effectiveness of our approach to study new and emerging attacks. We replay an Internet attack captured by the LANDER system on the DETERLab testbed within two hours.
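
As a rough illustration of timing-faithful replay, the sketch below re-sends packets from a capture while preserving their inter-arrival times, using scapy. It is a simplified stand-in, not the LANDER/DETERLab tooling from the paper; the trace file and interface names are placeholders:

import time
from scapy.all import rdpcap, sendp   # pip install scapy; sending needs root

pkts = rdpcap("DoS_trace.pcap")       # placeholder trace name

# Preserve captured inter-arrival times; that fine time-scale structure
# is exactly what constant-bit-rate synthesis loses.
prev = None
for p in pkts:
    if prev is not None:
        time.sleep(max(float(p.time) - prev, 0))
    prev = float(p.time)
    sendp(p, iface="eth0", verbose=False)   # testbed-facing interface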

Data from the paper is available as DoS_DNS_amplification-20130617 from the authors or at http://www.predict.org, and the tools are available at DETERLab.

Categories
Papers Publications

New conference paper “Detecting Encrypted Botnet Traffic” at Global Internet 2013

The paper “Detecting Encrypted Botnet Traffic” was accepted at Global Internet 2013 in Turin, Italy (available at http://www.netsec.colostate.edu/~zhang/DetectingEncryptedBotnetTraffic.pdf).

From the abstract:

Bot detection methods that rely on deep packet inspection (DPI) can be foiled by encryption. Encryption, however, increases entropy. This paper investigates whether adding high-entropy detectors to an existing bot detection tool that uses DPI can restore some of the bot visibility. We present two high-entropy classifiers, and use one of them to enhance BotHunter. Our results show that while BotHunter misses about 50% of the bots when they employ encryption, our high-entropy classifier restores most of its ability to detect bots, even when they use encryption.
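
To illustrate what a high-entropy detector keys on: encrypted or compressed payloads have near-uniform byte distributions, so Shannon entropy close to 8 bits/byte hints at ciphertext. The sketch below shows the core test; the length cutoff and threshold are hypothetical, and the paper’s two classifiers are more involved:

import math
import os
from collections import Counter

def byte_entropy(payload):
    """Shannon entropy of a payload, in bits per byte (maximum 8.0)."""
    counts = Counter(payload)
    n = len(payload)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def probably_encrypted(payload, threshold=7.5):
    # Hypothetical cutoff; a real classifier tunes this on labeled flows.
    return len(payload) >= 256 and byte_entropy(payload) > threshold

print(probably_encrypted(b"GET / HTTP/1.1\r\nHost: example.com\r\n" * 16))  # False
print(probably_encrypted(os.urandom(1024)))  # True: random bytes mimic ciphertext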

This work is advised by Christos Papadopoulos and Dan Massey at Colorado State University.