Trinocular showed an outage from 8:40am to 1:05pm (Mountain Time), with a smaller initial outage starting at 7am (2023-11-04T15:40 to T20:00 UTC, possibly starting at 14:00 UTC). This outage was quite severe, affecting more than 40% of the local networks that we monitor.
Cable cuts are hard to deal with, and we’re happy that they were able to restore service relatively quickly!
I would like to congratulate Dr. Guillermo Baltra for defending his PhD at the University of Southern California in August 2023 and completing his doctoral dissertation “Improving network reliability using a formal definition of the Internet core”.
From the abstract:
After 50 years, the Internet is still defined as “a collection of interconnected networks”. Yet seamless, universal connectivity is challenged in several ways. Political pressure threatens fragmentation due to de-peering; architectural changes such as carrier-grade NAT and the cloud make connectivity indirect; firewalls impede connectivity; and operational problems and commercial disputes all challenge the idea of a single set of “interconnected networks”. We propose that a new, conceptual definition of the Internet core helps disambiguate questions in analysis of network reliability and address space usage.
We prove this statement through three studies. First, we improve coverage of outage detection by dealing with sparse sections of the Internet, increasing from a nominal 67% responsive /24 blocks coverage to 96% of the responsive Internet. Second, we provide a new definition of the Internet core, and use it to resolve partial reachability ambiguities. We show that the Internet today has peninsulas of persistent, partial connectivity, and that some outages cause islands where the Internet at the site is up, but partitioned from the main Internet. Finally, we use our definition to identify ISP trends, with applications to policy and improving outage detection accuracy. We show how these studies together thoroughly prove our thesis statement. We provide a new conceptual definition of “the Internet core” in our second study about partial reachability. We use our definition in our first and second studies to disambiguate questions about network reliability, and in our third study, to understand ISP address space usage dynamics.
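To make the island-versus-core distinction concrete, here is a minimal, hypothetical sketch (not the dissertation's actual algorithm): model networks as nodes in a reachability graph, take the largest connected component as the core, and treat any other component as an island. Peninsulas, which involve partial and asymmetric reachability, need directed edges and are beyond this toy example.

```python
from collections import defaultdict

def connected_components(edges, nodes):
    """Group nodes into mutually reachable components (union-find)."""
    parent = {n: n for n in nodes}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for a, b in edges:
        parent[find(a)] = find(b)
    groups = defaultdict(set)
    for n in nodes:
        groups[find(n)].add(n)
    return list(groups.values())

# Toy topology: three networks reach each other; two others only
# reach each other, so they form an island cut off from the core.
nodes = {"core1", "core2", "core3", "island1", "island2"}
edges = [("core1", "core2"), ("core2", "core3"), ("island1", "island2")]
comps = connected_components(edges, nodes)
core = max(comps, key=len)               # largest component = the core
islands = [c for c in comps if c is not core]
```

The real definition must also handle one-way and time-varying reachability; this sketch only captures the simplest partition case.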
Guillermo’s PhD work was supported by NSF grants CNS-1806785, CNS-2007106, and NSF-2028279, DHS S&T Cyber Security Division contract 70RSAT18CB0000014, and a DHS contract administered by AFRL as contract FA8750-18-2-0280, to USC Viterbi, the Armada de Chile, and the Agencia Nacional de Investigación y Desarrollo de Chile (ANID).
Please see his individual publications for what data is available from his research; his results are also in use in ongoing Trinocular outage detection datasets.
I would like to congratulate Dr. Basileal Imana for defending his PhD at the University of Southern California in August 2023 and completing his doctoral dissertation “Methods for Auditing Social Media Algorithms in the Public Interest”.
From the abstract:
Social-media platforms are entering a new era of increasing scrutiny by public interest groups and regulators. One reason for the increased scrutiny is platform-induced bias in how they deliver ads for life opportunities. Certain ad domains are legally protected against discrimination, and even when not, some domains have societal interest in equitable ad delivery. Platforms use relevance-estimator algorithms to optimize the delivery of ads. Such algorithms are proprietary and therefore opaque to outside evaluation, and early evidence suggests these algorithms may be biased or discriminatory. In response to such risks, the U.S. and the E.U. have proposed policies to allow researchers to audit platforms while protecting users’ privacy and platforms’ proprietary information. Currently, no technical solution exists for implementing such audits with rigorous privacy protections and without putting significant constraints on researchers. In this work, our thesis is that relevance-estimator algorithms bias the delivery of opportunity ads, but new auditing methods can detect that bias while preserving privacy.
We support our thesis statement through three studies. In the first study, we propose a black-box method for measuring gender bias in the delivery of job ads with a novel control for differences in job qualification, as well as other confounding factors that influence ad delivery. Controlling for qualification is necessary since qualification is a legally acceptable factor to target ads with, and we must separate it from bias introduced by platforms’ algorithms. We apply our method to Meta and LinkedIn, and demonstrate that Meta’s relevance estimators result in discriminatory delivery of job ads by gender. In our second study, we design a black-box methodology that is the first to propose a means to draw out potential racial bias in the delivery of education ads. Our method employs a pair of ads that are seemingly identical education opportunities but one is of inferior quality tied with a historical societal disparity that ad delivery algorithms may propagate. We apply our method to Meta and demonstrate their relevance estimators racially bias the delivery of education ads. In addition, we observe that the lack of access to demographic attributes is a growing challenge for auditing bias in ad delivery. Motivated by this challenge, we make progress towards enabling use of inferred race in black-box audits by analyzing how inference error can lead to incorrect measurement of skew in ad delivery. Going beyond the domain-specific and black-box methods we used in our first two studies, our final study proposes a novel platform-supported framework to allow researchers to audit relevance estimators that is generalizable to studying various categories of ads, demographic attributes and target platforms. The framework allows auditors to get privileged query-access to platforms’ relevance estimators to audit for bias in the algorithms while preserving the privacy interests of users and platforms. 
Overall, our first two studies show relevance-estimator algorithms bias the delivery of job and education ads, and thus motivate making these algorithms the target of platform-supported auditing in our third study. Our work demonstrates a platform-supported means to audit these algorithms is the key to increasing public oversight over ad platforms while rigorously protecting privacy.
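As a rough illustration of the black-box style of measurement (a simplified stand-in, not the dissertation's actual statistical methodology), one can compare the delivered-audience gender fractions of two paired ads using normal-approximation confidence intervals; non-overlapping intervals suggest skewed delivery:

```python
import math

def delivery_fraction(n_f, n_m):
    """Fraction of an ad's delivered audience that is female, with a
    normal-approximation 95% confidence interval. Hypothetical metric
    for illustration only."""
    n = n_f + n_m
    p = n_f / n
    half = 1.96 * math.sqrt(p * (1 - p) / n)  # normal approx to binomial
    return p, (p - half, p + half)

# Two paired ads run with the same budget and targeting: if their
# delivery fractions' intervals do not overlap, the platform's
# delivery optimization is a candidate source of the skew.
p1, ci1 = delivery_fraction(700, 300)   # hypothetical ad A counts
p2, ci2 = delivery_fraction(450, 550)   # hypothetical ad B counts
skewed = ci1[0] > ci2[1] or ci2[0] > ci1[1]
```

The actual studies additionally control for qualification and other confounds, which this sketch omits entirely.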
Basi’s PhD work was co-advised by Aleksandra Korolova and John Heidemann, and supported by grants from the Rose Foundation and the NSF (CNS-1755992, CNS-1916153, CNS-1943584, CNS-1956435, and CNS-1925737). Please see his individual publications for what data is available from his research.
Sandeep Muthu completed his summer undergraduate research internship at ISI this summer, working with John Heidemann and Yuri Pradkin on his project “Determining the Risks of Tunnels Over the Internet”.
In his project, Sandeep examined how unauthenticated tunneling protocols can be infiltrated, and how often they are used in the Internet. He demonstrated that tunnels can be exploited in the DETER testbed, and showed that there are many tunnels in general use based on analysis of anonymized IXP data.
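As a sketch of what identifying tunnels in packet traces can look like (a hypothetical example, not Sandeep's actual analysis code), one can flag tunneled packets by the IPv4 protocol field, which the IANA registry assigns 4 for IP-in-IP and 47 for GRE, two tunneling protocols that carry no authentication of their own:

```python
import struct

# IANA-assigned IPv4 protocol numbers for common unauthenticated tunnels.
TUNNEL_PROTOCOLS = {4: "IP-in-IP", 47: "GRE"}

def tunnel_type(packet: bytes):
    """Return the tunnel protocol name for a raw IPv4 packet, or None."""
    if len(packet) < 20 or packet[0] >> 4 != 4:  # need an IPv4 header
        return None
    proto = packet[9]                            # protocol field is byte 9
    return TUNNEL_PROTOCOLS.get(proto)

# Minimal 20-byte IPv4 header carrying protocol 47 (GRE);
# addresses and checksum are zeroed for brevity.
hdr = struct.pack("!BBHHHBBH4s4s",
                  0x45, 0, 20, 0, 0, 64, 47, 0,
                  b"\x00" * 4, b"\x00" * 4)
```

Counting such packets across an anonymized IXP trace gives a lower bound on tunnel use; real analysis must also handle IPv6, truncation, and encapsulations like GUE that ride over UDP.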
Sandeep’s work was part of the ISI Research Experiences for Undergraduates program at USC/ISI. We thank Jelena Mirkovic (PI) for coordinating another year of this great program, and NSF for support through award #2051101. We also thank the University of Memphis (Christos Papadopoulos) and FIU
Network traffic is often diurnal, with some networks peaking during the workday and many homes during evening streaming hours. Monitoring systems consider diurnal trends for capacity planning and anomaly detection. In this paper, we reverse this inference and use diurnal network trends and their absence to infer human activity. We draw on existing and new ICMP echo-request scans of more than 5.2M /24 IPv4 networks to identify diurnal trends in IP address responsiveness. Some of these networks are change-sensitive, with diurnal patterns correlating with human activity. We develop algorithms to clean this data, extract underlying trends from diurnal and weekly fluctuation, and detect changes in that activity. Although firewalls hide many networks, and Network Address Translation often hides human trends, we show about 168k to 330k (3.3–6.4% of the 5.2M) /24 IPv4 networks are change-sensitive. These blocks are spread globally, representing some of the most active 60% of 2°×2° geographic gridcells, regions that include 98.5% of ping-responsive blocks. Finally, we detect interesting changes in human activity. Reusing existing data allows our new algorithm to identify changes, such as Work-from-Home due to the global reaction to the emergence of Covid-19 in 2020. We also see other changes in human activity, such as national holidays and government-mandated curfews. This ability to detect trends in human activity from Internet data provides a new ability to understand our world, complementing other sources of public information such as news reports and wastewater virus observation.
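The trend-extraction idea can be sketched in a few lines (a toy stand-in, not the paper's algorithm): smooth hourly responsiveness counts with a 24-hour window to remove the diurnal cycle, then look for large day-over-day shifts in the smoothed trend:

```python
import math

def moving_average(xs, window):
    """Centered moving average; windows shrink at the series edges."""
    out, half = [], window // 2
    for i in range(len(xs)):
        lo, hi = max(0, i - half), min(len(xs), i + half + 1)
        out.append(sum(xs[lo:hi]) / (hi - lo))
    return out

# Synthetic two weeks of hourly "responsive address" counts: a diurnal
# sine wave plus a level drop halfway through, standing in for a real
# change in human activity (e.g., a shift to work-from-home).
hours = 24 * 14
series = [100 + 20 * math.sin(2 * math.pi * h / 24)
          - (30 if h >= hours // 2 else 0)
          for h in range(hours)]

trend = moving_average(series, 24)          # diurnal cycle averaged out
# Flag the change as the largest day-over-day shift in the trend.
shift = max(range(24, hours), key=lambda h: abs(trend[h] - trend[h - 24]))
```

The paper's method additionally handles weekly cycles, missing data, and firewalled blocks, none of which this sketch attempts.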
This paper is a joint work of Xiao Song from USC, Guillermo Baltra from USC, and John Heidemann from USC/ISI. Datasets from this paper can be found at https://ant.isi.edu/datasets/ip_accumulation. This work was supported by NSF (MINCEQ, NSF 2028279; EIEIO, CNS-2007106; and InternetMap, CNS-2212480).
Concerns of potential harmful outcomes have prompted proposal of legislation in both the U.S. and the E.U. to mandate a new form of auditing where vetted external researchers get privileged access to social media platforms. Unfortunately, to date there have been no concrete technical proposals to provide such auditing, because auditing at scale risks disclosure of users’ private data and platforms’ proprietary algorithms. We propose a new method for platform-supported auditing that can meet the goals of the proposed legislation. The first contribution of our work is to enumerate the challenges and the limitations of existing auditing methods to implement these policies at scale. Second, we suggest that limited, privileged access to relevance estimators is the key to enabling generalizable platform-supported auditing of social media platforms by external researchers. Third, we show platform-supported auditing need not risk user privacy nor disclosure of platforms’ business interests by proposing an auditing framework that protects against these risks. For a particular fairness metric, we show that ensuring privacy imposes only a small constant factor increase (6.34x as an upper bound, and 4x for typical parameters) in the number of samples required for accurate auditing. Our technical contributions, combined with ongoing legal and policy efforts, can enable public oversight into how social media platforms affect individuals and society by moving past the privacy-vs-transparency hurdle.
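To illustrate why privacy protection can cost only a constant factor in samples (a toy sketch; the paper's framework and its 6.34x bound involve a specific fairness metric and mechanism), one can perturb delivery counts with Laplace noise before computing a skew estimate. With realistically large counts, the noisy estimate still converges to the true fraction:

```python
import math
import random

def laplace(scale, rng):
    """Laplace(0, scale) noise via inverse-CDF sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def noisy_fraction(n_a, n_b, epsilon, rng):
    """Differentially private estimate of n_a / (n_a + n_b): each raw
    count gets Laplace(1/epsilon) noise. This is the generic counting
    mechanism, not the paper's actual auditing protocol."""
    na = n_a + laplace(1 / epsilon, rng)
    nb = n_b + laplace(1 / epsilon, rng)
    return na / (na + nb)

# Hypothetical delivery counts 7000 vs. 3000: even at a strict
# epsilon of 0.1 the noise barely moves the estimated skew, so the
# auditor pays only a modest sample-size premium for privacy.
rng = random.Random(42)
estimates = [noisy_fraction(7000, 3000, 0.1, rng) for _ in range(200)]
mean = sum(estimates) / len(estimates)
```

The noise scale is fixed while counts grow with audit size, which is the intuition behind a constant-factor (rather than unbounded) cost.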
A 2-minute video overview of the work can be found here.
This paper is a joint work of Basileal Imana from USC, Aleksandra Korolova from Princeton University, and John Heidemann from USC/ISI.
These outages were nation-wide, apparently affecting most of Italy. However, it looks like they “only” affected 20-30% of networks, and not all Italian ISPs. We’re happy they were able to recover so quickly.
This event shows the importance of global network monitoring.
Our paper titled “Defending Root DNS Servers Against DDoS Using Layered Defenses” will appear at COMSNETS 2023 in January 2023. In this work by ASM Rizvi, Jelena Mirkovic, John Heidemann, Wes Hardaker, and Robert Story, we design an automated system named DDIDD with multiple filters to handle an ongoing DDoS attack on a DNS root server. We evaluated ten real-world attack events on B-root and showed DDIDD could successfully mitigate these attack events. We released the datasets for these attack events on our dataset webpage (dataset names starting with B_Root_Anomaly).
Update in January: we are happy to announce that this paper was awarded Best Paper for COMSNETS 2023! Thanks for the recognition.
From the abstract:
Distributed Denial-of-Service (DDoS) attacks exhaust resources, leaving a server unavailable to legitimate clients. The Domain Name System (DNS) is a frequent target of DDoS attacks. Since DNS is a critical infrastructure service, protecting it from DoS is imperative. Many prior approaches have focused on specific filters or anti-spoofing techniques to protect generic services. DNS root nameservers are more challenging to protect, since they use fixed IP addresses, serve very diverse clients and requests, receive predominantly UDP traffic that can be spoofed, and must guarantee high quality of service. In this paper we propose a layered DDoS defense for DNS root nameservers. Our defense uses a library of defensive filters, which can be optimized for different attack types, with different levels of selectivity. We further propose a method that automatically and continuously evaluates and selects the best combination of filters throughout the attack. We show that this layered defense approach provides exceptional protection against all attack types using traces of real attacks from a DNS root nameserver. Our automated system can select the best defense within seconds and quickly reduce the traffic to the server within a manageable range while keeping collateral damage lower than 2%. We can handle millions of filtering rules without noticeable operational overhead.
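The filter-selection loop can be sketched as follows (hypothetical names and toy traffic, not the DDIDD implementation): score each candidate filter by the attack traffic it drops and the legitimate traffic it loses, discard any filter above the collateral-damage bound, and deploy the best remaining one:

```python
COLLATERAL_BOUND = 0.02  # drop at most 2% of legitimate traffic

def score(filter_fn, attack, legit):
    """Fraction of attack traffic dropped, and collateral damage."""
    dropped = sum(1 for q in attack if filter_fn(q)) / len(attack)
    collateral = sum(1 for q in legit if filter_fn(q)) / len(legit)
    return dropped, collateral

def best_filter(filters, attack, legit):
    """Pick the filter dropping the most attack traffic within bound."""
    scored = [(name, *score(f, attack, legit)) for name, f in filters.items()]
    ok = [s for s in scored if s[2] <= COLLATERAL_BOUND]
    return max(ok, key=lambda s: s[1])[0] if ok else None

# Toy labeled traffic: each query is a (source address, query name) pair.
attack = [("198.51.100.%d" % i, "example.com") for i in range(100)]
legit = [("203.0.113.%d" % i, "example.org") for i in range(50)]
filters = {
    "qname-block": lambda q: q[1] == "example.com",  # selective filter
    "drop-all": lambda q: True,                      # too much collateral
}
chosen = best_filter(filters, attack, legit)
```

The real system re-evaluates continuously during an attack and works without labeled traffic, using anomaly signals instead of ground truth; this sketch shows only the selection step.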
This work is partially supported by the National Science Foundation (grant NSF OAC-1739034) and DHS HSARPA Cyber Security Division (grant SHQDC-17-R-B0004-TTA.02-0006-I), in collaboration with NWO.
The Domain Name System (DNS) is an essential service for the Internet which maps host names to IP addresses. The DNS Root Server System operates the top of this namespace. RIPE Atlas observes DNS from more than 11k vantage points (VPs) around the world, reporting the reliability of the DNS Root Server System in DNSmon. DNSmon shows that loss rates for queries to the DNS Root are nearly 10% for IPv6, much higher than the approximately 2% loss seen for IPv4. Although IPv6 is “new,” as an operational protocol available to a third of Internet users, it ought to be just as reliable as IPv4. We examine this difference at a finer granularity by investigating loss at individual VPs. We confirm that specific VPs are the source of this difference and identify two root causes: VP islands with routing problems at the edge which leave them unable to access IPv6 outside their LAN, and VP peninsulas which indicate routing problems in the core of the network. These problems account for most of the loss and nearly all of the difference between IPv4 and IPv6 query loss rates. Islands account for most of the loss (half of IPv4 failures and 5/6ths of IPv6 failures), and we suggest these measurement devices should be filtered out to get a more accurate picture of loss rates. Peninsulas account for the main differences between root identifiers, suggesting routing disagreements root operators need to address. We believe that filtering out both of these known problems provides a better measure of underlying network anomalies and loss and will result in more actionable alerts.
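The island/peninsula distinction at a vantage point can be illustrated with a small sketch (hypothetical, not the paper's method): count which root identifiers each VP's queries reached. A VP that reaches none is an island; one that reaches only some is a peninsula:

```python
def classify_vp(results):
    """Classify a vantage point from per-root-letter query outcomes.

    results: dict mapping root identifier -> True if queries succeeded.
    An island reaches no roots; a peninsula reaches only some.
    """
    successes = sum(results.values())
    if successes == 0:
        return "island"
    if successes < len(results):
        return "peninsula"
    return "connected"

# Hypothetical per-VP outcomes over three root identifiers.
vps = {
    "vp1": {"a": False, "b": False, "c": False},  # cut off entirely
    "vp2": {"a": True, "b": False, "c": True},    # partial reachability
    "vp3": {"a": True, "b": True, "c": True},     # fully connected
}
labels = {vp: classify_vp(r) for vp, r in vps.items()}
```

The paper works with repeated measurements over time and must separate transient loss from persistent partitions, which this one-shot sketch ignores.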
This work was done while Tarang was on his Summer 2022 undergraduate research internship at USC/ISI, with support from NSF grant 2051101 (PI: Jelena Mirkovic). John Heidemann and Yuri Pradkin’s work is supported by NSF through the EIEIO project (CNS-2007106). We thank Guillermo Baltra for his work on islands and peninsulas, as seen in his arXiv report.