Categories
Publications Students

congratulations to Lan Wei for her new PhD

I would like to congratulate Dr. Lan Wei for defending her PhD in September 2020 and completing her doctoral dissertation “Anycast Stability, Security and Latency in the Domain Name System (DNS) and Content Delivery Networks (CDNs)” in December 2020.

From the abstract:

Clients’ performance is important for both Content-Delivery Networks (CDNs) and the Domain Name System (DNS), and operators would like their services to meet users’ expectations. CDNs that provide stable connections prevent users from experiencing download pauses caused by connection breaks. Users expect DNS traffic to be secure, neither intercepted nor injected. Both CDN and DNS operators also care about low network latency, since users become frustrated by slow replies.


Many CDNs and DNS services (such as the DNS root) use IP anycast to bring content closer to users. Anycast-based services announce the same IP address(es) from globally distributed sites. In an anycast infrastructure, Internet routing protocols naturally direct users to a nearby site. The path between a user and an anycast site is formed hop by hop: at each hop (a network device such as a router), routing protocols like the Border Gateway Protocol (BGP) decide which next hop to use, and the ISP at each hop imposes its routing policies to influence BGP’s decisions. Without global knowledge of (or the ability to modify) the BGP routing tables of every ISP on the path, anycast infrastructure operators cannot predict or control in real time which site a user will reach or what the routing path will look like. Moreover, any change in routing policy along the path may change both the path and the site a user visits. We refer to this minimal control over routing towards an anycast service as the uncertainty of anycast routing. Using anycast spares operators the extra traffic management needed to map users to sites, but can they provide a good anycast-based service without precise control over routing?


This routing uncertainty raises three concerns: routing can change, breaking connections; uncertainty about global routing means spoofing can go undetected; and lack of knowledge of global routing can lead to suboptimal latency. In this thesis, we show how we confirm the stability, how we confirm the security, and how we improve the latency of anycast to answer these three concerns. First, routing changes can cause users to switch sites, immediately breaking stateful connections such as TCP connections. We study routing stability and demonstrate that connections in anycast infrastructure are rarely broken by routing instability: fewer than 0.15% of vantage points (VPs) see TCP connections frequently break due to 5-second timeouts over the 17 hours we observed, and we see such frequent breaks in only 1 of the 12 anycast services studied. A second problem is DNS spoofing, where a third party intercepts a DNS query and returns a false answer. We examine DNS spoofing to study two aspects of security, integrity and privacy, and we design an algorithm to detect spoofing and distinguish the different mechanisms used to spoof anycast-based DNS. We show that DNS spoofing is uncommon, affecting only 1.7% of all VPs, although it is increasing over the years. Among the three ways to spoof DNS (injection, proxies, and third-party anycast sites via prefix hijack), we show that third-party anycast sites are the least common. Last, diagnosing and improving poor latency can be difficult for CDNs. We develop a new approach, BAUP (bidirectional anycast unicast probing), which detects inefficient routing and suggests better routing alternatives. We use BAUP to study anycast latency. By applying BAUP and changing peering policies, a commercial CDN was able to reduce latency by more than half, from 40 ms to 16 ms, for regional users.
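
To make “which anycast site did I reach” concrete, here is a minimal editor’s sketch (not part of the dissertation or its measurement system) that asks a DNS server to identify the anycast instance that answered, using the conventional CHAOS-class hostname.bind query. It assumes the dnspython package is installed; the server address below is only an example, and not every anycast service answers this query.

```python
# Editor's sketch (not from the dissertation): ask an anycast DNS server which
# instance answered us, via the conventional CHAOS-class hostname.bind query.
# Requires dnspython (pip install dnspython).
import dns.message
import dns.query

ANYCAST_SERVER = "198.41.0.4"  # a.root-servers.net, used here only as an example anycast address

def anycast_site_identity(server_ip: str) -> str:
    """Return the name of the anycast instance that answered, if the server supports hostname.bind."""
    query = dns.message.make_query("hostname.bind", "TXT", rdclass="CH")
    response = dns.query.udp(query, server_ip, timeout=3)
    if not response.answer:
        return "(no hostname.bind answer)"
    # The TXT record carries whatever name the operator assigned to the responding site.
    return b" ".join(response.answer[0][0].strings).decode()

if __name__ == "__main__":
    print("answered by:", anycast_site_identity(ANYCAST_SERVER))
```

Repeating a query like this from many vantage points, over time, is one simple way to observe the site switches that make anycast routing uncertain.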

Lan defended her PhD when USC was on work-from-home due to COVID-19; she is the third ANT student with a fully on-line PhD defense.

Categories
Papers Publications

new journal paper “Plumb: Efficient Stream Processing of Multi-User Pipelines” in the Journal of Software: Practice and Experience

We have published a new journal paper “Plumb: Efficient Stream Processing of Multi-User Pipelines” in Wiley’s Journal of Software: Practice and Experience, available at https://onlinelibrary.wiley.com/doi/10.1002/spe.2909

Plumb provides a new pipeline-graph abstraction that allows multiple users to specify workflows in which Plumb can detect and eliminate duplicate processing and handle processing skew due to unbalanced data or stages. The end result is that users get their results faster and a shared cluster is used efficiently.
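
As a rough illustration of the duplicate-elimination idea (a hypothetical sketch, not Plumb’s actual API or implementation), one can hash each stage’s declared inputs and processing program: stages from different users that share a signature do the same work and can be run once. The stage names and programs below are made up.

```python
# Hypothetical sketch of the *idea* behind duplicate elimination across users'
# pipeline DAGs; this is not Plumb's implementation.
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class Stage:
    name: str        # user-visible label
    inputs: tuple    # names of the input streams this stage consumes
    program: str     # identity of the processing code (e.g., its hash or path)

    def signature(self) -> str:
        """Two stages with the same signature would do duplicate work."""
        key = "|".join(sorted(self.inputs)) + "|" + self.program
        return hashlib.sha256(key.encode()).hexdigest()

def merge_duplicates(stages: list[Stage]) -> dict[str, list[Stage]]:
    """Group stages from all users by signature; each group need only run once."""
    groups: dict[str, list[Stage]] = {}
    for stage in stages:
        groups.setdefault(stage.signature(), []).append(stage)
    return groups

# Two users who both decompress the same capture stream share one physical stage:
user_a = Stage("decompress-pcap", ("raw-pcap",), "xz -d")
user_b = Stage("unxz",            ("raw-pcap",), "xz -d")
user_c = Stage("dns-parse",       ("decompressed-pcap",), "dns_parse.py")
for sig, members in merge_duplicates([user_a, user_b, user_c]).items():
    print(sig[:8], [s.name for s in members])
```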

From the abstract of our journal paper:

Operational services run 24×7 and require analytics pipelines to evaluate performance. In mature services such as DNS, these pipelines often grow to many stages developed by multiple, loosely-coupled teams. Such pipelines pose two problems: first, computation and data storage may be duplicated across components developed by different groups, wasting resources. Second, processing can be skewed, with structural skew occurring when different pipeline stages need different amounts of resources, and computational skew occurring when a block of input data requires increased resources. Duplication and structural skew both decrease efficiency, increasing cost, latency, or both. Computational skew can cause pipeline failure or deadlock when resource consumption balloons; we have seen cases where pessimal traffic increases CPU requirements 6-fold. Detecting duplication is challenging when components from multiple teams evolve independently and require fault isolation. Skew management is hard due to dynamic workloads coupled with the conflicting goals of both minimizing latency and maximizing utilization. We propose Plumb, a framework to abstract stream processing as large-block streaming (LBS) for a multi-stage, multi-user workflow. Plumb users express analytics as a DAG of processing modules, allowing Plumb to integrate and optimize workflows from multiple users. Many real-world applications map to the LBS abstraction. Plumb detects and eliminates duplicate computation and storage, and it detects and addresses both structural and computational skew by tracking computation across the pipeline. We exercise Plumb using the analytics pipeline for B-Root DNS. We compare Plumb to a hand-tuned system, cutting latency to one-third the original, and requiring 39% fewer container hours, while supporting more flexible, multi-user analytics and providing greater robustness to DDoS-driven demands.

This journal paper is joint work of Abdul Qadeer and John Heidemann from USC/ISI.

Plumb is open source software, and we would be interested in beta testers. Please contact us if you think it would be useful for managing your workflows on a single computer or a cluster.

Categories
Internet Outages

Observing the CenturyLink outage on 2020-08-30

CenturyLink / Level3 was reported to have a major outage on Sunday, 2020-08-30 (as reported on CNN and discussed on slashdot).

This outage was very clear in our Trinocular near-real-time outage detection system. We have summarized the details, with images from before, during, and after the outage and an animation of the nearly 7-hour event; you can also see the event on our near-real-time outage website.

This outage is one of the largest U.S. nation-wide events since the 2014-08-27 Time Warner outage.

Categories
Presentations

Deep Dive into DNS at IETF108

The Domain Name System (DNS) is responsible for handling the initial steps of almost all connections on the Internet. USC/ISI’s Wes Hardaker, along with Geoff Huston and Joao Damas from APNIC, gave a “Deep Dive” presentation on how the DNS works at the 108th IETF conference. The recording is available on YouTube for those who missed it.

Categories
Papers Publications

new journal paper “Detecting IoT Devices in the Internet” in IEEE/ACM Transactions on Networking

We have published a new journal paper “Detecting IoT Devices in the Internet” in IEEE/ACM Transactions on Networking, available at https://www.isi.edu/~johnh/PAPERS/Guo20c.pdf

Figure 5 from [Guo20c], showing per-device-type AS penetration from 2013 to 2018 for 16 of the 23 device types we studied (omitting 7 device types that appear in fewer than 10 ASes)

From the abstract of our journal paper:

Distributed Denial-of-Service (DDoS) attacks launched from compromised Internet-of-Things (IoT) devices have shown how vulnerable the Internet is to large-scale DDoS attacks. Understanding the risks of these attacks requires learning about these IoT devices: where are they? how many are there? how are they changing? This paper describes three new methods to find IoT devices on the Internet: server IP addresses in traffic, server names in DNS queries, and manufacturer information in TLS certificates. Our primary methods (IP addresses and DNS names) use knowledge of servers run by the manufacturers of these devices. Our third method uses TLS certificates obtained by active scanning. We have applied our algorithms to a number of observations. With our IP-based algorithm, we report detections from a university campus over 4 months and from traffic transiting an IXP over 10 days. We apply our DNS-based algorithm to traffic from 8 root DNS servers from 2013 to 2018 to study AS-level IoT deployment. We find substantial growth (about 3.5×) in AS penetration for 23 types of IoT devices and a modest increase in device-type density for ASes with these device types (at most 2 device types in 80% of these ASes in 2018). DNS also shows substantial growth in IoT deployment in residential households from 2013 to 2017. Our certificate-based algorithm finds 254k IP cameras and network video recorders from 199 countries around the world.
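
As a hedged illustration of the DNS-based method only (this is not the paper’s algorithm, and the manufacturer domains below are hypothetical placeholders rather than the paper’s actual server lists), the core idea can be sketched as matching the names a client queries against per-manufacturer server suffixes:

```python
# Hypothetical sketch of the DNS-name-based detection idea; not the paper's
# algorithm, and the suffix list below is made up for illustration.
MANUFACTURER_SUFFIXES = {
    "example-camera-cloud.com": "ExampleCam IP camera",
    "example-plug-api.net": "ExamplePlug smart plug",
}

def classify_queries(queries_by_client: dict[str, list[str]]) -> dict[str, set[str]]:
    """Map each client IP to the device types its DNS queries suggest."""
    detections: dict[str, set[str]] = {}
    for client, qnames in queries_by_client.items():
        for qname in qnames:
            qname = qname.rstrip(".").lower()
            for suffix, device in MANUFACTURER_SUFFIXES.items():
                if qname == suffix or qname.endswith("." + suffix):
                    detections.setdefault(client, set()).add(device)
    return detections

print(classify_queries({
    "10.0.0.7": ["time.example-camera-cloud.com.", "www.example.org."],
    "10.0.0.9": ["api.example-plug-api.net."],
}))
```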

We make public the operational traffic we captured from 10 IoT devices we own at https://ant.isi.edu/datasets/iot/. We also use operational traffic from 21 IoT devices shared by the University of New South Wales at http://149.171.189.1/.

This journal paper is joint work of Hang Guo and John Heidemann from USC/ISI.

Categories
Data

fighting bit rot in long-term data archives with babarchive

As part of research at ANT we generate a lot of data, and our goal is to keep it safe even in the face of an imperfect world of data storage.

When we say a lot, we mean hundreds of terabytes: as of May 2020, we have 860 releasable datasets making up 134 TB of storage (510 TB if uncompressed). We provide this data at no cost to researchers, and since 2008 we’ve provided 2049 datasets (338 TB, or 1.1 PB if uncompressed!) to 406 researchers!

These datasets range from packet captures of “normal” traffic, to curated captures of DDoS attacks, as well as dozens of research paper-specific datasets, 16 years of Internet censuses and 7 years of Internet outages, plus target lists for IPv4 that are regularly used for traffic studies and tools like Verfploeter anycast mapping.

In keeping this data, our goal is to keep it intact: we want to fight bit rot and data loss. That means RAID-6 for primary storage, with monitoring and timely disk replacement. It means off-site backups (with a big thanks to our collaborators at Colorado State University: Christos Papadopoulos, Craig Partridge, and Dimitrios Kounalakis for their help). And it means watching the bits to make sure they don’t spontaneously change.

One might think that bits at rest stay at rest, but… not always. Over the last 20 years we have seen three cases where disks spontaneously changed a byte: in 2011 and 2012 I had bit flips in my personal files, and in 2020 we had a byte flip in a packet capture.

How do we know? We have application-level checksums of every file, and every day we take 10 minutes to check at least one dataset against its checksums. (Over time, we cover all datasets and then start all over.)

Our checksumming software is babarchive, our own wrapper for collecting SHA-256 checksums over a directory tree. We encourage other researchers interested in long-term data curation to carry out active content monitoring (in addition to backups and RAID).
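
For readers who want to do something similar, here is a minimal sketch of the same kind of check (babarchive itself is not shown here): record a SHA-256 manifest for a directory tree when a dataset is archived, then periodically re-verify it to catch silent bit flips.

```python
# Minimal sketch of checksum-based content monitoring; this is not babarchive.
import hashlib
import json
from pathlib import Path

def sha256_file(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large captures don't need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def make_manifest(root: Path) -> dict[str, str]:
    """Record a checksum for every file under root."""
    return {str(p.relative_to(root)): sha256_file(p)
            for p in sorted(root.rglob("*")) if p.is_file()}

def verify(root: Path, manifest_path: Path) -> list[str]:
    """Return relative paths that are missing or whose checksum changed."""
    manifest = json.loads(manifest_path.read_text())
    bad = []
    for rel, saved in manifest.items():
        path = root / rel
        if not path.is_file() or sha256_file(path) != saved:
            bad.append(rel)
    return bad

# Typical use: write the manifest once when a dataset is archived ...
#   json.dump(make_manifest(Path("dataset")), open("dataset.manifest.json", "w"))
# ... then periodically re-verify and investigate anything reported:
#   print(verify(Path("dataset"), Path("dataset.manifest.json")))
```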

A huge thanks to our research sponsors: DHS (through the LANDER, LACREND, and LACANIC projects), NSF (through the MADCAT and MR-Net projects), and DARPA (through GAWSEED).

Categories
Publications Technical Report

new technical report: IoTSTEED: Bot-side Defense to IoT-based DDoS Attacks (Extended)

We have released a new technical report IoTSTEED: Bot-side Defense to IoT-based DDoS Attacks (Extended) as ISI-TR-738, available at https://www.isi.edu/~hangguo/papers/Guo20a.pdf.

From the abstract:

We show IoTSTEED runs well on a commodity router: memory usage is small (4% of 512 MB) and the router forwards traffic at full uplink rates despite using about 50% of the CPU.

We propose IoTSTEED, a system running in edge routers to defend against Distributed Denial-of-Service (DDoS) attacks launched from compromised Internet-of-Things (IoT) devices. IoTSTEED watches traffic that leaves and enters the home network, detecting IoT devices at home, learning the benign servers they talk to, and filtering their traffic to other servers as a potential DDoS attack. We validate IoTSTEED’s accuracy and false positives (FPs) at detecting devices, learning servers, and filtering traffic by replaying 10 days of benign traffic captured from an IoT access network. We show IoTSTEED correctly detects all 14 IoT and 6 non-IoT devices in this network (100% accuracy) and maintains low false-positive rates when learning the servers IoT devices talk to (flagging 2% of benign servers as suspicious) and filtering IoT traffic (dropping only 0.45% of benign packets). We validate IoTSTEED’s true positives (TPs) and false negatives (FNs) in filtering attack traffic by replaying real-world DDoS traffic. Our experiments show IoTSTEED mitigates all typical attacks, regardless of the attacks’ traffic types, attacking devices, and victims; an intelligent adversary can design attacks that avoid detection in a few cases, but only at the cost of a weaker attack. Lastly, we deploy IoTSTEED in the NAT router of an IoT access network for 10 days, showing reasonable resource usage and verifying our testbed experiments for accuracy and learning in practice.
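
As a hedged sketch of the per-device allow-list idea described in the abstract (this is not IoTSTEED’s implementation, and the addresses and learning window are made up), the filtering step can be thought of as: learn the servers a device talks to during a benign window, then drop traffic to servers never seen before.

```python
# Hypothetical sketch of per-device server allow-listing; not IoTSTEED's code.
import time

class DeviceFilter:
    """Learn the servers one IoT device talks to, then filter the rest."""

    def __init__(self, learning_seconds: float):
        self.allowed: set[str] = set()
        self.learn_until = time.time() + learning_seconds

    def outbound(self, dst_ip: str) -> bool:
        """Return True if the packet should be forwarded."""
        if time.time() < self.learn_until:
            self.allowed.add(dst_ip)      # benign-server learning phase
            return True
        if dst_ip in self.allowed:
            return True
        # Traffic to a never-seen server after learning looks like potential
        # DDoS bot traffic, so drop it (a real system would also alert).
        return False

camera = DeviceFilter(learning_seconds=0)   # 0 = learning already finished
camera.allowed = {"203.0.113.10"}           # hypothetical manufacturer cloud server
print(camera.outbound("203.0.113.10"))      # True: known server
print(camera.outbound("198.51.100.99"))     # False: filtered as suspicious
```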

We share 10-day operational traffic captured from 14 IoT devices we own at https://ant.isi.edu/datasets/iot/ (see IoT_Operation_Traces-20200127) and release source code for IoTSTEED at https://ant.isi.edu/software/iotsteed/index.html.

This technical report is joint work of Hang Guo and John Heidemann from USC/ISI.

Categories
Students

congratulations to Calvin Ardi for his new PhD

I would like to congratulate Dr. Calvin Ardi for defending his PhD in April 2020 and completing his doctoral dissertation “Improving Network Security through Collaborative Sharing” in June 2020.

From the abstract:

Calvin Ardi and John Heidemann (inset), after Calvin filed his PhD dissertation.

As our world continues to become more interconnected through the Internet, cybersecurity incidents are correspondingly increasing in number, severity, and complexity. The consequences of these attacks include data loss and financial damages, and they are steadily moving from the digital to the physical world, impacting everything from public infrastructure to our own homes. The existing mechanisms for responding to cybersecurity incidents have three problems: they promote a security monoculture, they are too centralized, and they are too slow.

In this thesis, we show that improving one’s network security strongly benefits from a combination of personalized, local detection, coupled with the controlled exchange of previously-private network information with collaborators. We address the problem of a security monoculture with personalized detection, introducing diversity by, for example, tailoring to the individual’s browsing behavior. We approach the problem of too much centralization by localizing detection, emphasizing detection techniques that can be used on the client device or local network without reliance on external services. We counter slow mechanisms by coupling the controlled sharing of information with collaborators to reactive techniques, enabling a more efficient response to security events.

We prove that we can improve network security by demonstrating our thesis with four studies and their respective research contributions in malicious activity detection and cybersecurity data sharing. In our first study, we develop Content Reuse Detection, an approach to locally discover and detect duplication in large corpora, and we apply our approach to improve network security by detecting “bad neighborhoods” of suspicious activity on the web. Our second study is AuntieTuna, an anti-phishing browser tool that implements personalized, local detection of phish and improves network security by reducing successful web phishing attacks. In our third study, we develop Retro-Future, a framework for controlled information exchange that enables organizations to control the risk-benefit trade-off when sharing their previously-private data. Organizations use Retro-Future to share data within and across collaborating organizations, and they improve their network security by using the shared data to increase detection’s effectiveness in finding malicious activity. Finally, in our fourth study we present AuntieTuna2.0, extending the proactive detection of phishing sites in AuntieTuna with data sharing between friends. Users exchange previously-private information with collaborators to collectively build a defense, improving their network security and their group’s collective immunity against phishing attacks.
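
As a rough, hypothetical sketch of the chunk-hashing idea behind detecting content reuse (this is not the dissertation’s exact method, and the sample text is made up): hash overlapping word windows of each page so that reused passages hash identically, then score a page by how many of its chunks have been seen before.

```python
# Rough, hypothetical sketch of chunk-hashing for content-reuse detection;
# not the dissertation's exact method, and the sample text is made up.
import hashlib

def chunk_hashes(text: str, chunk_words: int = 8) -> set[str]:
    """Hash overlapping word windows so reused passages hash identically."""
    words = text.lower().split()
    return {
        hashlib.sha256(" ".join(words[i:i + chunk_words]).encode()).hexdigest()
        for i in range(max(1, len(words) - chunk_words + 1))
    }

def reuse_score(page: str, corpus_hashes: set[str]) -> float:
    """Fraction of a page's chunks already seen elsewhere in the corpus."""
    hashes = chunk_hashes(page)
    return len(hashes & corpus_hashes) / len(hashes)

known_bad = chunk_hashes("please verify your account password at our secure login portal now")
candidate = "urgent: please verify your account password at our secure login portal now or lose access"
print(f"reuse score: {reuse_score(candidate, known_bad):.2f}")
```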

Calvin defended his PhD when USC was on work-from-home due to COVID-19; he is the second ANT student with a fully on-line PhD defense.

Categories
News

the ns family of network simulators is awarded the SIGCOMM Networking Systems Award for 2020

From the SIGCOMM mailing list and Facebook feed, Dina Papagiannaki posted on June 3, 2020:

I hope that everyone in the community is safe. A brief announcement that we have our winner for the Networking Systems Award 2020. The committee, comprising Anja Feldmann (Max-Planck-Institut für Informatik), Srinivasan Keshav (University of Cambridge, chair), and Nick McKeown (Stanford University), has decided to present the award to the ns family of network simulators (ns-1, ns-2, and ns-3). Congratulations to all the contributors!

Description

“ns” is a well-known acronym in networking research, referring to a series of network simulators (ns-1, ns-2, and ns-3) developed over the past twenty-five years. ns-1 was developed at Lawrence Berkeley National Laboratory (LBNL) between 1995 and 1997, based on an earlier simulator (REAL, written by S. Keshav). ns-2 was an early open source project, developed in the 1997-2004 timeframe and led by collaborators from USC Information Sciences Institute, LBNL, UC Berkeley, and Xerox PARC. A companion network animator (nam) was also developed during this time [Est00]. Between 2005 and 2008, collaborators from the University of Washington, Inria Sophia Antipolis, Georgia Tech, and INESC TEC significantly rewrote the simulator to create ns-3, which continues today as an active open source project.

All of the ns simulators can be characterized as packet-level, discrete-event network simulators, with which users can build models of computer networks with varying levels of fidelity, in order to conduct performance evaluation studies. The core of all three versions is written in C++, and simulation scripts are written directly in a native programming language: for ns-1, in the Tool Command Language (Tcl), for ns-2, in object-oriented Tcl (OTcl), and for ns-3, in either C++ or Python. ns is a full-stack simulator, with a high degree of abstraction at the physical and application layers, and varying levels of modeling detail between the MAC and transport layers. ns-1 was released with a BSD software license, ns-2 with a collection of licenses later consolidated into a GNU GPLv2-compatible framework, and ns-3 with the GNU GPLv2 license.

ns-3 [Hen08, Ril10] can be viewed as a synthesis of three predecessor tools: yans [Lac06], GTNetS [Ril03], and ns-2 [Bre00]. ns-3 contains extensions to allow distributed execution on parallel processors, real-time scheduling with emulation capabilities for packet exchange with real systems, and a framework to allow C and C++ implementation (application and kernel) code to be compiled for reuse within ns-3 [Taz13]. Although ns-3 can be used as a general-purpose discrete-event simulator, and as a simulator for non-Internet-based networks, by far the most active use centers around Internet-based simulation studies, particularly those using its detailed models of Wi-Fi and 4G LTE systems. The project is now focused on developing models to allow ns-3 to support research and standardization activities involving several aspects of 5G NR, next-generation Wi-Fi, and the IETF Transport Area.

The ns-3-users Google Groups forum has over 9000 members (with several hundred monthly posts), and the developer mailing list contains over 1500 subscribers. Publication counts (as counted annually) in the ACM and IEEE digital libraries, as well as search results in Google Scholar, describing research work using or extending ns-2 and ns-3, continue to increase each year, and usage also appears to be growing within the networking industry and government laboratories. The project’s home page is at https://www.nsnam.org, and software development discussion is conducted on the ns-developers@isi.edu mailing list.

Nominees

The main authors of ns-1 were (in alphabetical order): Kevin Fall, Sally Floyd, Steve McCanne, and Kannan Varadhan.

ns-2 had a larger number of contributors. Space precludes listing all authors, but the following people were leading source code committers to ns-2 (in alphabetical order): Xuan Chen, Kevin Fall, Sally Floyd, Padma Haldar, John Heidemann, Tom Henderson, Polly Huang, K.C. Lan, Steve McCanne, Giao Nguyen, Venkat Padmanabhan, Yuri Pryadkin, Kannan Varadhan, Ya Xu, and Haobo Yu. A more complete list of ns-2 contributors can be found at: https://www.isi.edu/nsnam/ns/CHANGES.html.

The ns-3 simulator has been developed by over 250 contributors over the past fifteen years. The original main development team consisted of (in alphabetical order): Raj Bhattacharjea, Gustavo Carneiro, Craig Dowell, Tom Henderson, Mathieu Lacage, and George Riley.

Recognition is also due to the long list of ns-3 software maintainers, many of which made significant contributions to ns-3, including (in alphabetical order): John Abraham, Zoraze Ali, Kirill Andreev, Abhijith Anilkumar, Stefano Avallone, Ghada Badawy, Nicola Baldo, Peter D. Barnes, Jr., Biljana Bojovic, Pavel Boyko, Junling Bu, Elena Buchatskaya, Daniel Camara, Matthieu Coudron, Yufei Cheng, Ankit Deepak, Sebastien Deronne, Tom Goff, Federico Guerra, Budiarto Herman, Mohamed Amine Ismail, Sam Jansen, Konstantinos Katsaros, Joe Kopena, Alexander Krotov, Flavio Kubota, Daniel Lertpratchya, Faker Moatamri, Vedran Miletic, Marco Miozzo, Hemanth Narra, Natale Patriciello, Tommaso Pecorella, Josh Pelkey, Alina Quereilhac, Getachew Redieteab, Manuel Requena, Matias Richart, Lalith Suresh, Brian Swenson, Cristiano Tapparello, Adrian S.W. Tam, Hajime Tazaki, Frederic Urbani, Mitch Watrous, Florian Westphal, and Dizhi Zhou.

The full list of ns-3 authors is maintained in the AUTHORS file in the top-level source code directory, and full commit attributions can be found in the git commit logs.

References

[Bre00] Lee Breslau et al., Advances in network simulation, IEEE Computer, vol. 33, no. 5, pp. 59-67, May 2000.

[Est00] Deborah Estrin et al., Network Visualization with Nam, the VINT Network Animator, IEEE Computer, vol. 33, no.11, pp. 63-68, November 2000.

[Hen08] Thomas R. Henderson, Mathieu Lacage, and George F. Riley, Network simulations with the ns-3 simulator, In Proceedings of ACM Sigcomm Conference (demo), 2008.

[Lac06] Mathieu Lacage and Thomas R. Henderson. 2006. Yet another network simulator. In Proceeding from the 2006 workshop on ns-2: the IP network simulator (WNS2 ’06). Association for Computing Machinery, New York, NY, USA, 12–es.

[Ril03] George F. Riley, The Georgia Tech Network Simulator, In Proceedings of the ACM SIGCOMM Workshop on Models, Methods and Tools for Reproducible Network Research (MoMeTools), Aug. 2003.

[Ril10] George F. Riley and Thomas Henderson, The ns-3 Network Simulator. In Modeling and Tools for Network Simulation, SpringerLink, 2010.

[Taz13] Hajime Tazaki et al. Direct code execution: revisiting library OS architecture for reproducible network experiments. In Proceedings of the ninth ACM conference on Emerging networking experiments and technologies (CoNEXT ’13). Association for Computing Machinery, New York, NY, USA, 217–228.

USC/ISI had multiple projects and was very active in ns-2 development for many years, first led by Deborah Estrin (with the VINT project), then by John Heidemann (with the SAMAN and SCADDS projects), before Tom Henderson took over leadership and evolved it (with others) into ns-3. All of these efforts have been open source collaborations with key players at other institutions as well; Sally Floyd, Steve McCanne, and Kevin Fall were all leaders in those efforts.

I would particularly like to thank the several USC students who did PhDs on ns-2-related topics: Polly Huang, Kun-Chan Lan, Debojyoti Dutta, and, earlier, Kannan Varadhan. ns-2 also benefited from external code contributions from David B. Johnson’s Monarch group (then at CMU) and from Elizabeth Belding and Charles Perkins. (My apologies to other contributors I’m sure I’m missing.)


A huge thanks to the ns-1 authors (Kannan Varadhan was a USC student at the time), and a huge thanks to the ns-3 authors for taking over maintainership and evolution and keeping it vibrant.

Categories
Papers Publications

New paper “Bidirectional Anycast/Unicast Probing (BAUP): Optimizing CDN Anycast” at IFIP TMA 2020

We published a new paper “Bidirectional Anycast/Unicast Probing (BAUP): Optimizing CDN Anycast” by Lan Wei (University of Southern California/ISI), Marcel Flores (Verizon Digital Media Services), Harkeerat Bedi (Verizon Digital Media Services), and John Heidemann (University of Southern California/ISI) at the 2020 Network Traffic Measurement and Analysis Conference (TMA 2020).

From the abstract:

IP anycast is widely used today in Content Delivery Networks (CDNs) and for the Domain Name System (DNS) to provide efficient service to clients from multiple physical points-of-presence (PoPs). Anycast depends on BGP routing to map users to PoPs, so anycast efficiency depends on both the CDN operator and the routing policies of other ISPs. Detecting and diagnosing inefficiency is challenging in this distributed environment. We propose Bidirectional Anycast/Unicast Probing (BAUP), a new approach that detects anycast routing problems by comparing anycast and unicast latencies. BAUP measures latency to help us identify problems experienced by clients, triggering traceroutes to localize the cause and suggest opportunities for improvement. Evaluating BAUP on a large, commercial CDN, we show that problems affect 1.59% of observers, and we find multiple opportunities to improve service. Prompted by our work, the CDN changed its peering policy and was able to reduce latency by more than half, from 40 ms to 16 ms, for regions with more than 100k users.
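
As a hedged sketch of BAUP’s core comparison (this is not the measurement system used in the paper, which relies on RIPE Atlas probes and traceroutes; the addresses and threshold below are made up), the idea is to compare latency to the service’s anycast address against latency to the unicast address of the site currently serving the client, and flag vantage points where anycast is noticeably slower.

```python
# Hedged sketch of the anycast-vs-unicast latency comparison at the heart of
# BAUP; not the paper's measurement system. Addresses below are placeholders.
import socket
import time

def tcp_rtt(ip: str, port: int = 443, timeout: float = 2.0) -> float:
    """Rough RTT estimate: time to complete a TCP handshake, in milliseconds."""
    start = time.monotonic()
    with socket.create_connection((ip, port), timeout=timeout):
        pass
    return (time.monotonic() - start) * 1000.0

def anycast_inefficient(anycast_ip: str, unicast_ip: str,
                        threshold_ms: float = 10.0) -> bool:
    """Flag routing worth investigating when anycast lags unicast noticeably."""
    extra = tcp_rtt(anycast_ip) - tcp_rtt(unicast_ip)
    return extra > threshold_ms

if __name__ == "__main__":
    # Hypothetical addresses: the CDN's anycast VIP and the unicast address of
    # the PoP currently serving this client (learned out of band).
    try:
        if anycast_inefficient("192.0.2.1", "198.51.100.1"):
            print("anycast detours: trigger traceroutes to localize the problem")
    except (OSError, socket.timeout) as err:
        print("measurement failed (placeholder addresses are not reachable):", err)
```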

The data from this paper is publicly available from RIPE Atlas; please see the paper for the measurement IDs.