
new technical report “Web-scale Content Reuse Detection (extended)”

We released a new technical report “Web-scale Content Reuse Detection (extended)”, ISI-TR-2014-692, available at http://www.isi.edu/publications/trpublic/files/tr-692.pdf.

From the abstract:

Discovering the amount of chunk-level duplication in Geocities (2008/2009, 97M chunks, Fig. 11).

With the vast amount of accessible, online content, it is not surprising that unscrupulous entities “borrow” from the web to provide filler for advertisements, link farms, and spam, making a quick profit. Our insight is that cryptographic hashing and fingerprinting can efficiently identify content reuse for web-size corpora. We develop two related algorithms, one to automatically discover previously unknown duplicate content in the web, and the second to detect copies of discovered or manually identified content in the web. Our detection can also find bad neighborhoods, clusters of pages where copied content is frequent. We verify our approach with controlled experiments with two large datasets: a Common Crawl subset of the web, and a copy of Geocities, an older set of user-provided web content. We then demonstrate that we can discover otherwise unknown examples of duplication for spam, and detect both discovered and expert-identified content in these large datasets. Utilizing an original copy of Wikipedia as identified content, we find 40 sites that reuse this content, 86% for commercial benefit.
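For intuition only, here is a minimal Python sketch of the hashing idea described in the abstract: break each page into chunks, hash each chunk, and look for hashes that recur across the corpus or that match a known corpus such as Wikipedia. The chunk size, hash function, and thresholds are illustrative assumptions, not the report's actual parameters.

```python
# A minimal sketch (not the report's implementation) of chunk-level reuse detection.
import hashlib
from collections import Counter

CHUNK_SIZE = 1024  # bytes; hypothetical chunking granularity


def chunk_hashes(text: bytes):
    """Yield a cryptographic hash for each fixed-size chunk of a page."""
    for i in range(0, len(text), CHUNK_SIZE):
        yield hashlib.sha1(text[i:i + CHUNK_SIZE]).hexdigest()


def discover_duplicates(pages: dict[str, bytes], min_copies: int = 2):
    """First algorithm: find chunk hashes that recur across the corpus (candidate reuse)."""
    counts = Counter(h for body in pages.values() for h in chunk_hashes(body))
    return {h for h, c in counts.items() if c >= min_copies}


def detect_copies(pages: dict[str, bytes], known_hashes: set[str]):
    """Second algorithm: flag pages containing chunks from a known corpus (e.g. Wikipedia)."""
    return [url for url, body in pages.items()
            if any(h in known_hashes for h in chunk_hashes(body))]
```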


new technical report “T-DNS: Connection-Oriented DNS to Improve Privacy and Security (extended)”

We released a new technical report “T-DNS: Connection-Oriented DNS to Improve Privacy and Security (extended)”, ISI-TR-2014-693, available as http://www.isi.edu/~johnh/PAPERS/Zhu14b.pdf.

From the abstract:

DNS is the canonical protocol for connectionless UDP. Yet DNS today is challenged by eavesdropping that compromises privacy, source-address spoofing that results in denial-of-service (DoS) attacks on the server and third parties, injection attacks that exploit fragmentation, and size limitations that constrain policy and operational choices. We propose T-DNS to address these problems. It uses TCP to smoothly support large payloads and to mitigate spoofing and amplification for DoS. T-DNS uses transport-layer security (TLS) to provide privacy from users to their DNS resolvers and optionally to authoritative servers. Expectations about DNS suggest connections will balloon client latency and overwhelm servers with state, but our evaluation shows costs are modest: end-to-end latency from TLS to the recursive resolver is only about 9% slower when UDP is used to the authoritative server, and 22% slower with TCP to the authoritative. With diverse traces we show that frequent connection reuse is possible (60–95% for stub and recursive resolvers, although half that for authoritative servers), and after connection establishment, we show TCP and TLS latency is equivalent to UDP. With conservative timeouts (20 s at authoritative servers and 60 s elsewhere) and conservative estimates of connection state memory requirements, we show that server memory requirements match current hardware: a large recursive resolver may have 24k active connections requiring about 3.6 GB additional RAM. We identify the key design and implementation decisions needed to minimize overhead: query pipelining, out-of-order responses, TLS connection resumption, and plausible timeouts.
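As background for the connection-reuse and pipelining results, the sketch below (ours, not the paper's software) shows the standard DNS-over-TCP framing from RFC 1035: each message is prefixed with a two-byte length, which is what lets several queries share, and be pipelined on, a single connection. The server address, query construction, and strict in-order reads are simplifications; T-DNS additionally allows out-of-order responses matched by query ID.

```python
# A rough sketch of pipelined DNS queries over one TCP connection (RFC 1035 framing).
import socket
import struct


def build_query(name: str, qid: int) -> bytes:
    """Build a minimal DNS query for an A record (RD=1)."""
    header = struct.pack("!HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split(".")) + b"\x00"
    return header + qname + struct.pack("!HH", 1, 1)  # QTYPE=A, QCLASS=IN


def recv_exactly(sock: socket.socket, n: int) -> bytes:
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("connection closed early")
        buf += chunk
    return buf


def pipelined_queries(server: str, names: list[str]) -> list[bytes]:
    """Send several queries over one TCP connection, then read the replies."""
    with socket.create_connection((server, 53)) as s:
        for i, name in enumerate(names):
            msg = build_query(name, qid=i)
            s.sendall(struct.pack("!H", len(msg)) + msg)  # two-byte length prefix
        replies = []
        for _ in names:
            (length,) = struct.unpack("!H", recv_exactly(s, 2))
            replies.append(recv_exactly(s, length))
        return replies
```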

This paper is a major revision of the prior technical report ISI-TR-2014-688. Since that work we have improved our understanding of the availability of TCP fast open and TLS resumption, and we have tightened our estimates on memory based on external reports (section 5.2). This additional information has allowed us to conduct additional experiments, improve our modeling, and provide a more accurate view of what is possible today; our estimates of latency and memory consumption are both lower than in our prior technical report as a result. We have also added additional information about packet size limitations (Figure 2), experiments evaluating DNSCrypt/DNSCurve (section 6.1), analysis of DTLS, and covered a broader range of RTTs in our experiments. We believe these additions strengthen our central claims: that connectionless DNS causes multiple problems and that T-DNS addresses those problems with modest increase in latency and memory suitable for current hardware.


new technical report “When the Internet Sleeps: Correlating Diurnal Networks With External Factors (extended)”

We released a new technical report “When the Internet Sleeps: Correlating Diurnal Networks With External Factors (extended)”, ISI-TR-2014-691, by Lin Quan, John Heidemann, and Yuri Pradkin, available as http://www.isi.edu/~johnh/PAPERS/Quan14b.pdf.

Comparing observed diurnal phase and geolocation longitude for 287k geolocatable, diurnal blocks ([Quan14b], figure 14b)
From the abstract:

As the Internet matures, policy questions loom larger in its operation. When should an ISP, city, or government invest in infrastructure? How do their policies affect use? In this work, we develop a new approach to evaluate how policies, economic conditions, and technology correlate with Internet use around the world. First, we develop an adaptive and accurate approach to estimate block availability, the fraction of active IP addresses in each /24 block over short timescales (every 11 minutes). Our estimator provides a new lens to interpret data taken from existing long-term outage measurements, thus requiring no additional traffic. (If new collection were required, it would be lightweight, since on average, outage detection requires less than 20 probes per hour per /24 block; less than 1% of background radiation.) Second, we show that spectral analysis of this measure can identify diurnal usage: blocks where addresses are regularly used during part of the day and idle in other times. Finally, we analyze data for the entire responsive Internet (3.7M /24 blocks) over 35 days. These global observations show when and where the Internet sleeps—networks are mostly always-on in the US and Western Europe, and diurnal in much of Asia, South America, and Eastern Europe. ANOVA testing shows that diurnal networks correlate negatively with country GDP and electrical consumption, quantifying that national policies and economics relate to networks.
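As a rough illustration of the spectral step (ours, not the paper's code), one can take a block's availability time series sampled every 11 minutes, compute its Fourier transform, and call the block diurnal when a large share of the non-DC power sits at one cycle per day; the phase of that component is what figure 14b compares against longitude. The scoring threshold is an assumption.

```python
# Sketch: flag diurnal /24 blocks from an availability time series sampled every 11 minutes.
import numpy as np

SAMPLES_PER_DAY = 24 * 60 // 11  # about 130 observations per day at 11-minute rounds


def diurnal_score_and_phase(availability: np.ndarray):
    """Return (share of non-DC power at one cycle/day, phase of that component in hours)."""
    x = availability - availability.mean()                    # drop the DC component
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / SAMPLES_PER_DAY)  # frequencies in cycles/day
    k = int(np.argmin(np.abs(freqs - 1.0)))                   # bin closest to one cycle/day
    power = np.abs(spectrum) ** 2
    score = power[k] / power[1:].sum()                        # fraction of non-DC power at 1/day
    phase_hours = (np.angle(spectrum[k]) / (2 * np.pi)) * 24.0
    return score, phase_hours

# A block might be labeled diurnal when, say, score > 0.5; that cutoff is an assumption.
```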

Data from this paper is available from http://www.isi.edu/ant/traces/internet_outages/index.html, and from http://www.predict.org as dataset internet_outage_adaptive_a12w-20130424.


new talk “T-DNS: Connection-Oriented DNS to Improve Privacy and Security” given at DNS-OARC

John Heidemann gave the talk “T-DNS: Connection-Oriented DNS to Improve Privacy and Security” at the Spring DNS-OARC meeting in Warsaw, Poland on May 10, 2014. Slides are available at http://www.isi.edu/~johnh/PAPERS/Heidemann14c.html.

don’t fear connections for DNS

From the abstract:

This talk will discuss connection-oriented DNS to improve DNS security and privacy. DNS is the canonical example of a connectionless, single packet, request/response protocol, with UDP as its dominant transport. Yet DNS today is challenged by eavesdropping that compromises privacy, source-address spoofing that results in denial-of-service (DoS) attacks on the server and third parties, injection attacks that exploit fragmentation, and size limitations that constrain policy and operational choices. We propose T-DNS to address these problems: it uses TCP to smoothly support large payloads and mitigate spoofing and amplification for DoS. T-DNS uses transport-layer security (TLS) to provide privacy from users to their DNS resolvers and optionally to authoritative servers.

Traditional wisdom is that connection setup will balloon latency for clients and overwhelm servers. We provide data to show that these assumptions are overblown: our model of end-to-end latency shows TLS to the recursive resolver is only about 5-24% slower, with UDP to the authoritative server. End-to-end latency is 19-33% slower with TLS to recursive and TCP to authoritative. Experiments behind these models show that after connection establishment, TCP and TLS latency is equivalent to UDP. Using diverse trace data we show that frequent connection reuse is possible (60-95% for stub and recursive resolvers, although half that for authoritative servers). With conservative timeouts (20 s at authoritative servers and 60 s elsewhere) we show that server memory requirements match current hardware: a large recursive resolver may have 25k active connections consuming about 9 GB of RAM. These results depend on specific design and implementation decisions: query pipelining, out-of-order responses, TLS connection resumption, and plausible timeouts.
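Of the optimizations listed, TLS connection resumption is easy to illustrate with Python's ssl module: the client caches the TLS session from its first connection and presents it on the next one, avoiding a full handshake. This is a generic sketch under assumptions of ours (a DNS-over-TLS style port 853 and a placeholder resolver address), not the talk's implementation, which negotiates TLS in-band as described in the draft linked below.

```python
# Sketch of TLS session resumption for DNS over TLS (illustrative only).
import socket
import ssl
import struct

ctx = ssl.create_default_context()


def dot_query(server: str, wire_query: bytes, session=None):
    """Send one DNS query over TLS; return (reply, TLS session for later resumption)."""
    raw = socket.create_connection((server, 853))        # placeholder port and address
    tls = ctx.wrap_socket(raw, server_hostname=server, session=session)
    tls.sendall(struct.pack("!H", len(wire_query)) + wire_query)
    (length,) = struct.unpack("!H", tls.recv(2))          # simplified framing read
    reply = tls.recv(length)
    sess = tls.session                                    # cache for resumption
    tls.close()
    return reply, sess

# The first connection pays the full TLS handshake; later ones can resume:
# reply1, sess = dot_query("192.0.2.1", q1)
# reply2, _    = dot_query("192.0.2.1", q2, session=sess)
```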

We hope to solicit feedback from the OARC community about this work to understand design and operational concerns if T-DNS deployment was widespread. The work in the talk is by Liang Zhu, Zi Hu, and John Heidemann (all of USC/ISI), Duane Wessels and Allison Mankin (both of Verisign), and Nikita Somaiya (USC/ISI).

A technical report describing the work is at http://www.isi.edu/~johnh/PAPERS/Zhu14a.pdf and the protocol changes are described at http://datatracker.ietf.org/doc/draft-hzhwm-start-tls-for-dns/.


new technical report “T-DNS: Connection-Oriented DNS to Improve Privacy and Security”

We released a new technical report “T-DNS: Connection-Oriented DNS to Improve Privacy and Security”, ISI-TR-2014-688, available as http://www.isi.edu/~johnh/PAPERS/Zhu14a.pdf.

From the abstract:

This paper explores connection-oriented DNS to improve DNS security and privacy. DNS is the canonical example of a connectionless, single packet, request/response protocol, with UDP as its dominant transport. Yet DNS today is challenged by eavesdropping that compromises privacy, source-address spoofing that results in denial-of-service (DoS) attacks on the server and third parties, injection attacks that exploit fragmentation, and size limitations that constrain policy and operational choices. We propose T-DNS to address these problems: it uses TCP to smoothly support large payloads and mitigate spoofing and amplification for DoS. T-DNS uses transport-layer security (TLS) to provide privacy from users to their DNS resolvers and optionally to authoritative servers. Traditional wisdom is that connection setup will balloon latency for clients and overwhelm servers. These are myths—our model of end-to-end latency shows TLS to the recursive resolver is only about 21% slower, with UDP to the authoritative server. End-to-end latency is 90% slower with TLS to recursive and TCP to authoritative. Experiments behind these models show that after connection establishment, TCP and TLS latency is equivalent to UDP. Using diverse trace data we show that frequent connection reuse is possible (60–95% for stub and recursive resolvers, although half that for authoritative servers). With conservative timeouts (20 s at authoritative servers and 60 s elsewhere) we show that server memory requirements match current hardware: a large recursive resolver may have 25k active connections consuming about 9 GB of RAM. We identify the key design and implementation decisions needed to minimize overhead—query pipelining, out-of-order responses, TLS connection resumption, and plausible timeouts.
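A quick back-of-envelope on the abstract's memory figures (a rough division of the numbers quoted above, not the paper's model; the extended report ISI-TR-2014-693, described earlier on this page, later revises these estimates downward):

```python
# Rough per-connection state implied by the abstract's figures:
# 25k active connections and about 9 GB of additional RAM.
active_connections = 25_000
extra_ram_bytes = 9 * 2**30
print(extra_ram_bytes / active_connections / 1024)  # roughly 380 KB per connection
```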



new video “A Retrospective on an Australian Routing Event”

On 2012-02-23, hardware problems in an Australian ISP's (Dodo) router caused it to announce many global routes to its upstream ISP (Telstra), and from there to others.

The result: for 45 minutes, millions of Australians lost international Internet connectivity.

While this problem was detected and corrected in less than an hour, this kind of problem can reoccur.

In this video we show the Internet address space (IPv4) from Sydney, Australia. Colors show estimated physical location (blue: North America, red: Europe, green: Asia). Addresses map to a Hilbert curve, and nearby addresses form squares. White boxes show routing changes, with bursts after 02:40 UTC.
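For readers unfamiliar with the mapping, the sketch below shows a standard Hilbert-curve distance-to-coordinate conversion (a generic sketch, not the address browser's actual code): each /24 prefix, treated as a 24-bit index, lands on a 4096x4096 grid so that numerically nearby prefixes occupy nearby squares.

```python
# Standard Hilbert-curve d -> (x, y) conversion for placing /24 blocks on a map.
def hilbert_d2xy(side: int, d: int):
    """Convert distance d along the curve to (x, y) on a side x side grid."""
    x = y = 0
    s = 1
    while s < side:
        rx = 1 & (d // 2)
        ry = 1 & (d ^ rx)
        if ry == 0:                      # rotate the quadrant when needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        d //= 4
        s *= 2
    return x, y


def block_to_pixel(prefix: str):
    """Place a /24 like '192.0.2' on the 4096x4096 map."""
    a, b, c = (int(o) for o in prefix.split("."))
    return hilbert_d2xy(4096, (a << 16) | (b << 8) | c)
```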

In the visualization we see there are many, many routing changes for much of the Internet (the many white boxes), evidence of routing instability in Sydney.

A copy of this video is also available at Vimeo (some systems may have problems viewing the above embedded video, but Vimeo is a good alternative).

This video was made by Kaustubh Gadkari, John Heidemann, Cathie Olschanowsky, Christos Papadopoulos, Yuri Pradkin, and Lawrence Weikum at University of Southern California/Information Sciences Institute (USC/ISI) and Colorado State University/Computer Science (CSU).

This video uses software developed at USC/ISI and CSU: Retro-future Time Travel, the LANDER IPv4 Web Address Browser, and BGPMon, the BGP logging and monitoring system. Data from this video is available from BGPMon and PREDICT (or the authors).

This work was supported by DHS S&T (BGPMon, contract N66001-08-C-2028; LANDER, contract D08PC75599, admin. by SPAWAR; LACREND, contract FA8750-12-2-0344, admin. by AFRL; Retro-future, contract N66001-13-C-3001, admin. by SPAWAR), and NSF/CISE (BGPMon, grant CNS-1305404).  Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of funding and administrative agencies.


new conference paper “The Need for End-to-End Evaluation of Cloud Availability” in PAM 2014

The paper “The Need for End-to-End Evaluation of Cloud Availability” was published by PAM 2014 in Marina del Rey, CA (available at http://www.isi.edu/~zihu/paper/cloud_availability.pdf).

From the abstract:

People’s computing lives are moving into the cloud, making understanding cloud availability increasingly critical. Prior studies of Internet outages have used ICMP-based pings and traceroutes. While these studies can detect network availability, we show that they can be inaccurate at estimating cloud availability. Without care, ICMP probes can underestimate availability because ICMP is not as robust as application-level measurements such as HTTP. They can overestimate availability if they measure reachability of the cloud’s edge, missing failures in the cloud’s back-end. We develop methodologies sensitive to five “nines” of reliability, and then we compare ICMP and end-to-end measurements for both cloud VM and storage services. We show case studies where one fails and the other succeeds, and our results highlight the importance of application-level retries to reach high precision. When possible, we recommend end-to-end measurement with application-level protocols to evaluate the availability of cloud services.
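To make the ICMP-versus-application distinction concrete, here is an illustrative sketch (not the paper's measurement code) that probes the same endpoint both ways, with application-level retries; the host, URL, retry count, and timeouts are hypothetical.

```python
# Sketch: compare an ICMP reachability check with an end-to-end HTTP check.
import subprocess
import urllib.request


def icmp_up(host: str) -> bool:
    """One ICMP echo via the system ping command (Linux-style flags)."""
    return subprocess.run(["ping", "-c", "1", "-W", "2", host],
                          stdout=subprocess.DEVNULL).returncode == 0


def http_up(url: str, retries: int = 3) -> bool:
    """Application-level check; retries reduce false outage reports."""
    for _ in range(retries):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if 200 <= resp.status < 300:
                    return True
        except OSError:
            pass
    return False

# The two can disagree: ICMP may reach the cloud's edge while the back-end service
# is down, or ICMP may be dropped or rate-limited while HTTP still succeeds.
# print(icmp_up("storage.example.com"), http_up("https://storage.example.com/ping"))
```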

Citation: Zi Hu, Liang Zhu, Calvin Ardi, Ethan Katz-Bassett, Harsha Madhyastha, John Heidemann, Minlan Yu. The Need for End-to-End Evaluation of Cloud Availability. Passive and Active Measurements Conference (PAM). Los Angeles, CA, USA, March 2014.


keynote “Sharing Network Data: Bright Gray Days Ahead” given at Passive and Active Measurement Conference

I’m honored to have been invited to give the keynote talk “Sharing Network Data: Bright Gray Days Ahead” at the Passive and Active Measurement Conference 2014 in Marina del Rey.

A copy of the talk slides is at http://www.isi.edu/~johnh/PAPERS/Heidemann14b (pdf).

Some alternatives, perhaps brighter than the gray of standard anonymization.

From the talk’s abstract:

Sharing data is what we expect as a community. From the IMC best paper award requiring a public dataset to NSF data management plans, we know that data is crucial to reproducible science. Yet privacy concerns today make data acquisition difficult and sharing harder still. AOL and Netflix have released anonymized datasets that leaked customer information, at least for a few customers and with some effort. With the EU suggesting that IP addresses are personally identifiable information, are we doomed to IP-address free “Internet” datasets?
In this talk I will explore the issues in data sharing, suggesting that we need to move beyond black and white definitions of private and public datasets, to embrace the gray shades of data sharing in our future. Gray need not be gloomy. I will discuss some new ideas in sharing that suggest that, if we move beyond “anonymous ftp” as our definition, the future may be gray but bright.

This talk did not generate new datasets, but it grows out of our experiences distributing data through several research projects (such as LANDER and LACREND, both part of the DHS PREDICT program) mentioned in the talk, with data available at http://www.isi.edu/ant/traces/. This talk represents my own opinions, not those of these projects or their sponsors.


congratulations to Lin Quan for his new PhD

I would like to congratulate Dr. Lin Quan for defending his PhD in Dec. 2013 and completing his doctoral dissertation “Learning about the Internet through Efficient Sampling and Aggregation” in Jan. 2014.

Lin Quan (left) and John Heidemann, after Lin’s PhD defense.

From the abstract:

The Internet is important for nearly all aspects of our society, affecting ordinary people, businesses, and social activities. Because of its importance and widespread applications, we want to have good knowledge about the Internet’s operation, reliability, and performance, through various kinds of measurements. However, despite this wide usage, we have only limited knowledge of its overall performance and reliability. The first reason for this limited knowledge is that there is no central governance of the Internet, making both active and passive measurements hard. The second reason is the huge scale of the Internet. This makes brute-force analysis hard because of practical computing resource limits such as CPU, memory, and probe rate.

This thesis states that sampling and aggregation are necessary to overcome resource constraints in time and space and to gain better knowledge of the Internet. Many other Internet measurement studies also utilize sampling and aggregation techniques to discover properties of the Internet. We distinguish our work by exploring novel mechanisms and new knowledge in several specific areas. First, we aggregate short-time-scale observations and use an efficient multi-time-scale query scheme to discover the properties and causes of long-lived Internet flows. Second, we sample and probe /24 blocks in the IPv4 address space, and use greedy clustering algorithms to efficiently characterize Internet outages. Third, we show an efficient and effective aggregation technique through visualization and clustering. This technique makes both manual inspection and automated characterization easier. Last, we develop an adaptive probing system to study global-scale Internet reliability. It samples and adapts the probe rate within each /24 block for accurate beliefs. By aggregation and correlation with other domains, we are also able to study broader policy effects on Internet use, such as political causes, economic conditions, and access technologies.
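As a toy illustration of the adaptive-probing idea (ours, not the dissertation's system), a sequential-sampling loop can keep probing random addresses in a /24 until the availability estimate is tight enough or a probe budget runs out; the confidence target and budget below are assumptions.

```python
# Minimal sequential-sampling sketch of adapting probe effort within a /24 block.
import math
import random


def estimate_block_availability(probe, max_probes: int = 100, half_width: float = 0.1):
    """probe(addr_index) -> bool; return (estimated fraction of active addresses, probes used)."""
    responses = 0
    for n in range(1, max_probes + 1):
        responses += probe(random.randrange(256))   # probe one random address in the block
        p = responses / n
        # normal-approximation 95% interval; stop once the estimate is precise enough
        if n >= 20 and 1.96 * math.sqrt(p * (1 - p) / n) < half_width:
            break
    return p, n
```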

This thesis provides several examples of Internet knowledge discovery using new sampling and aggregation mechanisms. We believe our new approaches to sampling and aggregation can be used by, and will inspire new designs for, future Internet measurement systems that must overcome resource constraints such as large amounts of widely dispersed data.



ANT research group lunch

In early December we had an ANT research group lunch to celebrate recent PhD defenses (Xue Cai and Lin Quan) and graduates (Chengjie Zhang). As a special treat, alumnae Alefiya Hussain and Genevieve Bartlett joined us. A yummy lunch and a great occasion!

ANT Project members, Dec. 2013