Categories
Publications Technical Report

new technical report “Poster: Lightweight Content-based Phishing Detection”

We released a new technical report “Poster: Lightweight Content-based Phishing Detection”, ISI-TR-698, available at http://www.isi.edu/publications/trpublic/files/tr-698.pdf.

The poster abstract and poster (included as part of the technical report) appeared at the poster session at the 36th IEEE Symposium on Security and Privacy in May 2015 in San Jose, CA, USA.

We have released an alpha version of our extension and source code here: http://www.isi.edu/ant/software/phish/.
We would greatly appreciate any help and feedback in testing our plugin!

From the abstract:

Our browser extension hashes the content of a visited page and compares the hashes with a set of known good hashes. If the number of matches exceeds a threshold, the site is flagged as a suspected phish and an alert is displayed to the user.

Increasing use of Internet banking and shopping by a broad spectrum of users results in greater potential profits from phishing attacks via websites that masquerade as legitimate sites to trick users into sharing passwords or financial information. Most browsers today detect potential phishing with URL blacklists; while effective at stopping previously known threats, blacklists must react to new threats as they are discovered, leaving users vulnerable for a period of time. Alternatively, whitelists can be used to identify “known-good” websites so that off-list sites (to include possible phish) can never be accessed, but are too limited for many users. Our goal is proactive detection of phishing websites with neither the delay of blacklist identification nor the strict constraints of whitelists. Our approach is to list known phishing targets, index the content at their correct sites, and then look for this content to appear at incorrect sites. Our insight is that cryptographic hashing of page contents allows for efficient bulk identification of content reuse at phishing sites. Our contribution is a system to detect phish by comparing hashes of visited websites to the hashes of the original, known good, legitimate website. We implement our approach as a browser extension in Google Chrome and show that our algorithms detect a majority of phish, even with minimal countermeasures to page obfuscation. A small number of alpha users have been using the extension without issues for several weeks, and we will be releasing our extension and source code upon publication.
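The hash-and-compare idea in the abstract can be illustrated with a minimal Python sketch. The fixed-size chunking, SHA-256, and a match threshold of 3 are our illustrative assumptions here, not the extension's actual design:

```python
import hashlib

def chunk_hashes(page_text, chunk_size=200):
    # Hash fixed-size chunks of page content. The chunking strategy is
    # illustrative; the extension's actual content-splitting is not shown here.
    chunks = [page_text[i:i + chunk_size]
              for i in range(0, len(page_text), chunk_size)]
    return {hashlib.sha256(c.encode()).hexdigest() for c in chunks}

def looks_like_phish(page_text, known_good_hashes, threshold=3):
    # Flag a page when enough of its content hashes match a known-good site:
    # heavy reuse of a legitimate site's content at a different origin is the
    # signal the poster describes.
    matches = chunk_hashes(page_text) & known_good_hashes
    return len(matches) >= threshold
```

A phishing page that copies most of a bank's login page would share most chunk hashes with the indexed original and exceed the threshold, while an unrelated page would match few or none.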

Categories
Papers Publications

new conference paper “Connection-Oriented DNS to Improve Privacy and Security” in Oakland 2015

The paper “Connection-Oriented DNS to Improve Privacy and Security” will appear at the 36th IEEE Symposium on Security and Privacy in May 2015 in San Jose, CA, USA  (available at http://www.isi.edu/~liangzhu/papers/Zhu15b.pdf)

From the abstract:

The Domain Name System (DNS) seems ideal for connectionless UDP, yet this choice results in challenges of eavesdropping that compromises privacy, source-address spoofing that simplifies denial-of-service (DoS) attacks on the server and third parties, injection attacks that exploit fragmentation, and reply-size limits that constrain key sizes and policy choices. We propose T-DNS to address these problems. It uses TCP to smoothly support large payloads and to mitigate spoofing and amplification for DoS. T-DNS uses transport-layer security (TLS) to provide privacy from users to their DNS resolvers and optionally to authoritative servers. TCP and TLS are hardly novel, and expectations about DNS suggest connections will balloon client latency and overwhelm servers with state. Our contribution is to show that T-DNS significantly improves security and privacy: TCP prevents denial-of-service (DoS) amplification against others, reduces the effects of DoS on the server, and simplifies policy choices about key size. TLS protects against eavesdroppers to the recursive resolver. Our second contribution is to show that with careful implementation choices, these benefits come at only modest cost: end-to-end latency from TLS to the recursive resolver is only about 9% slower when UDP is used to the authoritative server, and 22% slower with TCP to the authoritative. With diverse traces we show that connection reuse can be frequent (60–95% for stub and recursive resolvers, although half that for authoritative servers), and after connection establishment, experiments show that TCP and TLS latency is equivalent to UDP. With conservative timeouts (20 s at authoritative servers and 60 s elsewhere) and estimated per-connection memory, we show that server memory requirements match current hardware: a large recursive resolver may have 24k active connections requiring about 3.6 GB additional RAM.
Good performance requires key design and implementation decisions we identify: query pipelining, out-of-order responses, TCP fast-open and TLS connection resumption, and plausible timeouts.
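The pipelining that the abstract identifies as a key design decision builds on standard DNS-over-TCP framing: RFC 1035 prefixes each message with a two-byte length, so multiple queries and out-of-order responses can share one connection. A minimal sketch of that framing (the helper names are ours):

```python
import struct

def frame(msg: bytes) -> bytes:
    # DNS over TCP prefixes each message with a two-byte network-order length
    # (RFC 1035, section 4.2.2); T-DNS reuses this framing over TLS as well.
    return struct.pack("!H", len(msg)) + msg

def unframe(stream: bytes):
    # Split a byte stream of pipelined DNS messages back into individual
    # messages. Because responses may arrive out of order, callers match each
    # response to its query by the 16-bit ID in the message header.
    msgs, i = [], 0
    while i + 2 <= len(stream):
        (length,) = struct.unpack("!H", stream[i:i + 2])
        if i + 2 + length > len(stream):
            break  # incomplete trailing message; wait for more data
        msgs.append(stream[i + 2:i + 2 + length])
        i += 2 + length
    return msgs
```

With this framing a client can write several framed queries back-to-back on one connection, which is what makes connection reuse pay off after the initial TCP/TLS handshake cost.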

The work in the paper is by Liang Zhu, Zi Hu and John Heidemann (USC/ISI), Duane Wessels and Allison Mankin (both of Verisign Labs), and Nikita Somaiya (USC/ISI).  Earlier versions of this paper were released as ISI-TR-688 and ISI-TR-693; this paper adds results and supersedes that work.

The data in this paper is available to researchers at no cost on request. Please see T-DNS-experiments-20140324 at our dataset page.

Categories
Papers Publications

new workshop paper “Privacy Principles for Sharing Cyber Security Data” in IWPE 15

The paper “Privacy Principles for Sharing Cyber Security Data” (available at https://www.isi.edu/~calvin/papers/Fisk15a.pdf) will appear at the International Workshop on Privacy Engineering (co-located with IEEE Symposium on Security and Privacy) on May 21, 2015 in San Jose, California.

From the abstract:

Sharing cyber security data across organizational boundaries brings both privacy risks in the exposure of personal information and data, and organizational risk in disclosing internal information. These risks occur as information leaks in network traffic or logs, and also in queries made across organizations. They are also complicated by the trade-offs in privacy preservation and utility present in anonymization to manage disclosure. In this paper, we define three principles that guide sharing security information across organizations: Least Disclosure, Qualitative Evaluation, and Forward Progress. We then discuss engineering approaches that apply these principles to a distributed security system. Application of these principles can reduce the risk of data exposure and help manage trust requirements for data sharing, helping to meet our goal of balancing privacy, organizational risk, and the ability to better respond to security with shared information.

The work in the paper is by Gina Fisk (LANL), Calvin Ardi (USC/ISI), Neale Pickett (LANL), John Heidemann (USC/ISI), Mike Fisk (LANL), and Christos Papadopoulos (Colorado State). This work is supported by DHS S&T, Cyber Security division.

Categories
Papers Publications

new workshop paper “Assessing Affinity Between Users and CDN Sites” in TMA 2015

The paper “Assessing Affinity Between Users and CDN Sites” (available at http://www.isi.edu/~xunfan/research/Fan15a.pdf) will appear at the Traffic Monitoring and Analysis Workshop in April 2015 in Barcelona, Spain.

From the abstract:

Large web services employ CDNs to improve user performance. CDNs improve performance by serving users from nearby FrontEnd (FE) Clusters. They also spread users across FE Clusters when one is overloaded or unavailable and others have unused capacity. Our paper is the first to study the dynamics of the user-to-FE Cluster mapping for Google and Akamai from a large range of client prefixes. We measure how 32,000 prefixes associate with FE Clusters in their CDNs every 15 minutes for more than a month. We study geographic and latency effects of mapping changes, showing that 50–70% of prefixes switch between FE Clusters that are very distant from each other (more than 1,000 km), and that these shifts sometimes (28–40% of the time) result in large latency shifts (100 ms or more). Most prefixes see large latencies only briefly, but a few (2–5%) see high latency much of the time. We also find that many prefixes are directed to several countries over the course of a month, complicating questions of jurisdiction.
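Classifying a mapping change as "very distant" (the abstract's >1,000 km criterion) reduces to a great-circle distance between FE Cluster locations. A small sketch using the standard haversine formula; the coordinates in the usage example below are illustrative, not the paper's cluster locations:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance in km between two points,
    # using a mean Earth radius of 6371 km.
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + \
        cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def is_distant_switch(site_a, site_b, threshold_km=1000):
    # Classify a user-to-FE Cluster mapping change as "distant"
    # per the paper's more-than-1,000 km criterion.
    return haversine_km(*site_a, *site_b) > threshold_km
```

For example, a prefix remapped from a Los Angeles cluster to a New York cluster (roughly 3,900 km apart) would count as distant, while a move to a nearby Bay Area cluster would not.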

Citation: Xun Fan, Ethan Katz-Bassett and John Heidemann. Assessing Affinity Between Users and CDN Sites. To appear in Traffic Monitoring and Analysis Workshop. Barcelona, Spain. April 2015.

All data in this paper is available to researchers at no cost on request. Please see our CDN affinity dataset webpage.

This research is partially sponsored by the Department of Homeland Security (DHS) Science and Technology Directorate, HSARPA, Cyber Security Division, BAA 11-01-RIKA and Air Force Research Laboratory, Information Directorate under agreement number FA8750-12-2-0344, NSF CNS-1351100, and via SPAWAR Systems Center Pacific under Contract No. N66001-13-C-3001. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views contained herein are those of the authors and do not necessarily represent those of DHS or the U.S. Government.
Categories
Presentations

new animation: the August 2014 Time Warner outage

Global network outages on 2014-08-27 during the Time Warner event in the U.S.

On August 27, 2014, Time Warner suffered a network outage that affected about 11 million customers for more than two hours (making national news). We have been observing global network outages since December 2013, including this outage.

We recently animated this August Time Warner outage.

We see that the Time Warner outage lasted about two hours and affected a good swath of the United States. We caution that all large network operators have occasional outages–this animation is not intended to complain about Time Warner, but to illustrate the need to have tools that can detect and visualize national-level outages.  It also puts the outage into context: we can see a few other outages in Uruguay, Brazil, and Saudi Arabia.

This analysis uses dataset usc-lander /internet_outage_adaptive_a17all-20140701, available for research use from PREDICT, or by request from us if PREDICT access is not possible.

This animation was first shown at the Dec. 2014 DHS Cyber Security Division R&D Showcase and Technical Workshop as part of the talk “Towards Understanding Internet Reliability” given by John Heidemann. This work was supported by DHS, most recently through the LACREND project.

Categories
Presentations

new animation: a sample of U.S. networks, before and after Hurricane Sandy

In October 2012, Hurricane Sandy made landfall on the U.S. East Coast causing widespread power outages. We were able to see the effects of Hurricane Sandy by analyzing active probing of the Internet. We first reported this work in a technical report and then with more refined analysis in a peer-reviewed paper.

Network outages for a sample of U.S. East Coast networks on the day after Hurricane Sandy made landfall.

We recently animated our data showing Hurricane Sandy landfall.

These 4 days before landfall and 7 after show some interesting results: on the day of landfall we see about three times the number of outages relative to “typical” U.S. networks, and it takes about four days to recover back to typical conditions.

This analysis uses dataset usc-lander / internet_address_survey_reprobing_it50j, available for research use from PREDICT, or by request from us if PREDICT access is not possible.

This animation was first shown at the Dec. 2014 DHS Cyber Security Division R&D Showcase and Technical Workshop as part of the talk “Towards Understanding Internet Reliability” given by John Heidemann. This work was supported by DHS, most recently through the LACREND project.

Categories
Presentations

new animation: eight years of Internet IPv4 Censuses

We’ve been taking Internet IPv4 censuses regularly since 2006.  In each census, we probe the entire allocated IPv4 address space.  You may browse 8 years of data at our IPv4 address browser.

A still image from our animation of 8 years of IPv4 censuses.

We recently put together an animation showing 8 years of IPv4 censuses, from 2006 through 2014.

These eight years show some interesting events, from an early “open” Internet in 2006, to full allocation of IPv4 by ICANN in 2011, to higher utilization in 2014.

All data shown here can be browsed at our website.
Data is available for research use from PREDICT or by request from us if PREDICT access is not possible.

This animation was first shown at the Dec. 2014 DHS Cyber Security Division R&D Showcase and Technical Workshop as part of the talk “Towards Understanding Internet Reliability” given by John Heidemann.  This work was supported by DHS, most recently through the LACREND project.

Categories
Papers Publications

new workshop paper “Measuring DANE TLSA Deployment” in TMA 2015

The paper “Measuring DANE TLSA Deployment” will appear at the Traffic Monitoring and Analysis Workshop in April 2015 in Barcelona, Spain (available at http://www.isi.edu/~liangzhu/papers/dane_tlsa.pdf).

From the abstract:

The DANE (DNS-based Authentication of Named Entities) framework uses DNSSEC to provide a source of trust, and with TLSA it can serve as a root of trust for TLS certificates. This serves to complement traditional certificate authentication methods, which is important given the risks inherent in trusting hundreds of organizations—risks already demonstrated with multiple compromises. The TLSA protocol was published in 2012, and this paper presents the first systematic study of its deployment. We studied TLSA usage, developing a tool that actively probes all signed zones in .com and .net for TLSA records. We find that TLSA use is early: in our latest measurement, of the 485k signed zones, we find only 997 TLSA names. We characterize how it is being used so far, and find that around 7–13% of TLSA records are invalid. We find 33% of TLSA responses are larger than 1500 bytes and will very likely be fragmented.
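Probing for TLSA records follows a fixed naming convention: RFC 6698 publishes a TLSA record at an owner name derived from the port, transport protocol, and host. A small sketch of that convention, plus a basic sanity check on TLSA RDATA fields; the paper's own validity checks are more thorough (e.g., verifying the association against the server's actual certificate), so this only illustrates the field ranges:

```python
def tlsa_owner_name(host, port=443, proto="tcp"):
    # A TLSA record for a TLS service is published at _port._proto.host
    # (RFC 6698, section 3); e.g., a certificate for https://www.example.com
    # is covered by _443._tcp.www.example.com.
    return f"_{port}._{proto}.{host}"

def tlsa_fields_valid(usage, selector, mtype):
    # Check TLSA RDATA fields against the ranges RFC 6698 defines:
    # certificate usage 0-3, selector 0-1, matching type 0-2.
    # A full validity check (as in the paper) would go further.
    return usage in range(4) and selector in range(2) and mtype in range(3)
```

A measurement tool like the one described can enumerate signed zones, query the `_443._tcp.` name for each candidate host, and then check the returned records for validity.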

The work in the paper is by Liang Zhu (USC/ISI), Duane Wessels and Allison Mankin (both of Verisign Labs), and John Heidemann (USC/ISI).

Categories
Software releases

Digit-1.1 release

Digit-1.1 has been released (available at https://ant.isi.edu/software/tdns/index.html). Digit is a client-side DNS tool that can perform DNS queries via different protocols such as UDP, TCP, and TLS. This tool is primarily designed to evaluate the client-side latency of using DNS over TCP/TLS, as described in the technical report “T-DNS: Connection-Oriented DNS to Improve Privacy and Security” (http://www.isi.edu/~johnh/PAPERS/Zhu14b/index.html).

A README in the package has detailed instructions about how to use this software.

Categories
Presentations

new talk “Internet Populations (Good and Bad): Measurement, Estimation, and Correlation” at the ICERM Workshop on Cybersecurity

John Heidemann gave the talk “Internet Populations (Good and Bad): Measurement, Estimation, and Correlation” at the ICERM Workshop on Cybersecurity at Brown University, Providence, Rhode Island on October 22, 2014. Slides are available at http://www.isi.edu/~johnh/PAPERS/Heidemann14e/.

Can we improve the mathematical tools we use to measure and understand the Internet?

From the abstract:

Our research studies the Internet’s public face. Since 2006 we have been taking censuses of the Internet address space (pinging all IPv4 addresses) every 3 months. Since 2012 we have studied network outages and events like Hurricane Sandy, using probes of much of the Internet every 11 minutes. Most recently we have evaluated the diurnal Internet, finding countries where most people turn off their computers at night. Finally, we have looked at network reputation, identifying how spam generation correlates with network location, and others have studied multiple measurements of “network reputation”.

A common theme across this work is that one must estimate characteristics of the edge of the Internet in spite of noisy measurements and underlying changes. One also needs to compare and correlate these imperfect measurements with other factors (from GDP to telecommunications policies).

How do these applications relate to the mathematics of census taking and measurement, estimation, and correlation? Are there tools we should be using that we aren’t? Do the properties of the Internet suggest new approaches (for example where rapid full enumeration is possible)? Does correlation and estimates of network “badness” help us improve cybersecurity by treating different parts of the network differently?