Categories
Presentations

new talk “New Opportunities for Research and Experiments in Internet Naming And Identification” at the AIMS Workshop

John Heidemann gave the talk “New Opportunities for Research and Experiments in Internet Naming And Identification” at the AIMS 2016 workshop at CAIDA, La Jolla, California on February 11, 2016.  Slides are available at http://www.isi.edu/~johnh/PAPERS/Heidemann16a.pdf.

Needs for new naming and identity research prompt new research infrastructure, enabling new research directions.

From the abstract:

DNS is central to Internet use today, yet research on DNS is difficult: many researchers find it challenging to create realistic experiments at scale and representative of the large installed base, and datasets are often short (two days or less) or otherwise limited. Yet DNS evolution presses on: improvements to privacy are needed, and extensions like DANE provide an opportunity for DNS to improve security and support identity management. We are exploring how to grow the research community and enable meaningful work on Internet naming. In this talk we will propose new research infrastructure to support realistic DNS experiments and longitudinal data studies. We are looking for feedback on our proposed approaches and input about your pressing research problems in Internet naming and identification.
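
The abstract mentions DANE, which lets a site publish its TLS certificate association in DNS. As a small illustration of the kind of naming experiment such infrastructure could support, here is a sketch of a DANE TLSA lookup using the dnspython library; dnspython >= 2.0 is assumed, and example.com is a placeholder domain:

```python
# Sketch: look up DANE TLSA records for an HTTPS service with dnspython.
# Assumes dnspython >= 2.0 (pip install dnspython); example.com is a placeholder.
import dns.resolver

def fetch_tlsa(host, port=443, proto="tcp"):
    """Return (usage, selector, matching-type, cert-data) tuples for a service."""
    qname = "_%d._%s.%s" % (port, proto, host)
    try:
        answers = dns.resolver.resolve(qname, "TLSA")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []  # this service publishes no DANE statement
    return [(r.usage, r.selector, r.mtype, r.cert.hex()) for r in answers]

if __name__ == "__main__":
    for record in fetch_tlsa("example.com"):
        print(record)
```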

For more information see our project website.

Categories
Papers Publications

new workshop paper “AuntieTuna: Personalized Content-based Phishing Detection” in USEC 2016

The paper “AuntieTuna: Personalized Content-based Phishing Detection” will appear at the NDSS Usable Security Workshop on February 21, 2016 in San Diego, CA, USA (available at https://www.isi.edu/~calvin/papers/Ardi16a.pdf).

From the abstract:

[Figure: Implementation diagram of the AuntieTuna anti-phishing plugin.]

Phishing sites masquerade as copies of legitimate sites (“targets”) to fool people into sharing sensitive information that can then be used for fraud. Current phishing defenses can be ineffective, with training ignored, blacklists of discovered bad sites too slow to pick up new threats, and whitelists of known-good sites too limiting. We have developed a new technique that automatically builds personalized lists of target sites (candidates that may be copied by phish) and then tests sites as a user browses them. Our approach uses cryptographic hashing of each page’s rendered Document Object Model (DOM), providing a zero false positive rate and identifying more than half of detectable phish in a controlled study. Since each user develops a customized list of target sites, our approach presents a diverse defense against phishers. We have prototyped our approach as a Chrome browser plugin called AuntieTuna, emphasizing usability through automated and simple manual addition of target sites and clean reports of potential phish that include context about the targeted site. AuntieTuna does not slow web browsing time and presents alerts on phishing pages before users can divulge information. Our plugin is open-source and has been in use by a few users for months.
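
The core mechanism in the abstract is content hashing: hash chunks of each known-good target page, then flag pages on other hosts that reuse that content. Here is a toy sketch of that idea in Python. The real plugin hashes the browser-rendered DOM inside Chrome; chunking serialized HTML by block-level tags here is our illustrative assumption, not the paper's exact method:

```python
# Toy sketch of AuntieTuna-style detection: build a personal set of content
# hashes from target pages, then flag untrusted pages that reuse those chunks.
# Chunking by block-level tags is an assumption for illustration only.
import hashlib
import re

def chunk_hashes(html):
    """Hash coarse content chunks of a page (non-trivial block-tag sections)."""
    chunks = re.split(r"(?i)</(?:div|p|form|table)>", html)
    return {
        hashlib.sha256(c.strip().encode()).hexdigest()
        for c in chunks if len(c.strip()) > 100
    }

class PhishDetector:
    def __init__(self):
        self.target_hashes = set()  # personalized list of target-site content

    def add_target(self, html):
        """Record content hashes of a known-good (target) site."""
        self.target_hashes |= chunk_hashes(html)

    def looks_like_phish(self, url, html, trusted_hosts):
        """A page on an untrusted host that reuses target content is suspect."""
        host = re.sub(r"^https?://([^/]+).*", r"\1", url)
        if host in trusted_hosts:
            return False
        # Exact cryptographic-hash matches keep false positives near zero.
        return bool(chunk_hashes(html) & self.target_hashes)
```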

The work in this paper is by Calvin Ardi (USC/ISI) and John Heidemann (USC/ISI).

Categories
Publications Technical Report

new technical report “BotDigger: Detecting DGA Bots in a Single Network”

We have released a new technical report “BotDigger: Detecting DGA Bots in a Single Network”, CS-16-101, available at http://www.cs.colostate.edu/~hanzhang/papers/BotDigger-techReport.pdf.

The code of BotDigger is available on GitHub at https://github.com/hanzhang0116/BotDigger.

From the abstract:

To improve the resiliency of communication between bots and C&C servers, bot masters began utilizing Domain Generation Algorithms (DGA) in recent years. Many systems have been introduced to detect DGA-based botnets. However, they suffer from several limitations, such as requiring DNS traffic collected across many networks, the presence of multiple bots from the same botnet, and so forth. These limitations make it very hard to detect individual bots when using traffic collected from a single network. In this paper, we introduce BotDigger, a system that detects DGA-based bots using DNS traffic without a priori knowledge of the domain generation algorithm. BotDigger utilizes a chain of evidence, including quantity, temporal, and linguistic evidence, to detect an individual bot by monitoring traffic only at the DNS servers of a single network. We evaluate BotDigger’s performance using traces from two DGA-based botnets: Kraken and Conficker. Our results show that BotDigger detects all the Kraken bots and 99.8% of Conficker bots. A one-week DNS trace captured from our university and three traces collected from our research lab are used to evaluate false positives. The results show that the false positive rates are 0.05% and 0.39% for these two groups of background traces, respectively.
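
To make the “chain of evidence” concrete, here is a small sketch of the three kinds of signals over one host's failed lookups (NXDOMAIN responses). The Shannon-entropy feature, the fixed window, and the thresholds are our illustrative assumptions, not BotDigger's actual statistics:

```python
# Sketch of DGA-style evidence over one host's NXDOMAIN stream:
# quantity (how many), temporal (how bursty), linguistic (how random-looking).
# Features and thresholds here are illustrative, not BotDigger's.
import math
from collections import Counter

def entropy(label):
    """Shannon entropy of a domain label; DGA names tend to score high."""
    counts, n = Counter(label), len(label)
    return -sum(c / n * math.log2(c / n) for c in counts.values()) if n else 0.0

def suspicious(nxdomains, window=600.0, min_count=10, min_entropy=3.0):
    """Flag a host whose (timestamp, qname) NXDOMAINs show all three signals."""
    if len(nxdomains) < min_count:                       # quantity evidence
        return False
    times = sorted(t for t, _ in nxdomains)
    if times[-1] - times[0] > window:                    # temporal evidence
        return False
    labels = [name.split(".")[0] for _, name in nxdomains]
    avg_h = sum(entropy(l) for l in labels) / len(labels)
    return avg_h >= min_entropy                          # linguistic evidence
```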

This work is by Han Zhang, Manaf Gharaibeh, Spiros Thanasoulas and Christos Papadopoulos (Colorado State University).

Categories
Papers Publications

new conference paper “Measuring the Latency and Pervasiveness of TLS Certificate Revocation” in PAM 2016

The paper “Measuring the Latency and Pervasiveness of TLS Certificate Revocation” will appear at the Passive and Active Measurement (PAM) Conference in March 2016 in Heraklion, Crete, Greece (available at http://www.isi.edu/~liangzhu/papers/Zhu16a.pdf).

From the abstract:

Today, Transport-Layer Security (TLS) is the bedrock of Internet security for the web and web-derived applications. TLS depends on the X.509 Public Key Infrastructure (PKI) to authenticate endpoint identity. An essential part of a PKI is the ability to quickly revoke certificates, for example, after a key compromise. Today the Online Certificate Status Protocol (OCSP) is the most common way to quickly distribute revocation information. However, prior and current concerns about OCSP latency and privacy raise questions about its use. We examine OCSP using passive network monitoring of live traffic at the Internet uplink of a large research university and verify the results using active scans. Our measurements show that the median latency of OCSP queries is quite good: only 20 ms today, much less than the 291 ms observed in 2012. This improvement is because content delivery networks (CDNs) serve most OCSP traffic today; our measurements show 94% of queries are served by CDNs. We also show that OCSP use is ubiquitous today: it is used by all popular web browsers, as well as important non-web applications such as MS-Windows code signing.
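
For readers who want to try a simple active measurement in the spirit of the paper's verification scans, here is a sketch that times a single OCSP query using Python's cryptography package and the standard library. The PEM file contents are placeholders for an end-entity certificate and its issuer, and this measures one query, not the paper's full methodology:

```python
# Sketch: time one OCSP query for a certificate (requires: pip install cryptography).
# cert_pem / issuer_pem are placeholder inputs for the end-entity and issuer certs.
import time
import urllib.request
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.x509 import ocsp
from cryptography.x509.oid import AuthorityInformationAccessOID, ExtensionOID

def ocsp_latency(cert_pem, issuer_pem):
    cert = x509.load_pem_x509_certificate(cert_pem)
    issuer = x509.load_pem_x509_certificate(issuer_pem)

    # Find the OCSP responder URL in the Authority Information Access extension.
    aia = cert.extensions.get_extension_for_oid(
        ExtensionOID.AUTHORITY_INFORMATION_ACCESS).value
    url = next(d.access_location.value for d in aia
               if d.access_method == AuthorityInformationAccessOID.OCSP)

    # Build a DER-encoded OCSP request for this certificate.
    builder = ocsp.OCSPRequestBuilder().add_certificate(cert, issuer, hashes.SHA1())
    der_req = builder.build().public_bytes(serialization.Encoding.DER)

    http_req = urllib.request.Request(
        url, data=der_req, headers={"Content-Type": "application/ocsp-request"})

    start = time.monotonic()
    with urllib.request.urlopen(http_req, timeout=10) as resp:
        body = resp.read()
    latency_ms = (time.monotonic() - start) * 1000.0

    status = ocsp.load_der_ocsp_response(body).certificate_status
    return latency_ms, str(status)
```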

The work in the paper is by Liang Zhu (USC/ISI), Johanna Amann (ICSI) and John Heidemann (USC/ISI). The active probe dataset in this paper is available upon request.