Categories
Data

fighting bit rot in long-term data archives with babarchive

As part of research at ANT we generate a lot of data, and our goal is to keep it safe even in the face of an imperfect world of data storage.

When we say a lot, we mean hundreds of terabytes: As of May 2020, we have 860 releasable datasets making up 134 TB of storage (510 TB if uncompressed). We provide this data at no cost to researchers, and since 2008 we’ve provided 2049 datasets (338 TB, or 1.1 PB if uncompressed!) to 406 researchers!

These datasets range from packet captures of “normal” traffic to curated captures of DDoS attacks, along with dozens of research-paper-specific datasets, 16 years of Internet censuses, 7 years of Internet outages, and target lists for IPv4 that are regularly used for traffic studies and tools like Verfploeter anycast mapping.

Keeping this data means keeping it intact: we want to fight bit rot and data loss. That means RAID-6 for primary storage, with monitoring and timely disk replacement. It means off-site backup (with a big thanks to our collaborators at Colorado State University, Christos Papadopoulos, Craig Partridge, and Dimitrios Kounalakis for their help). And it means watching the bits to make sure they don’t spontaneously change.

One might think that bits at rest stay at rest, but… not always. Over the last 20 years we’ve seen disks spontaneously change a byte three times: in 2011 and 2012 I had bit flips in my personal files, and in 2020 we had a byte flip in a packet capture.

How do we know? We have application-level checksums of every file, and every day we take 10 minutes to check at least one dataset against its checksums. (Over time, we cover all datasets and then start all over.)

Our checksumming software is babarchive, our own wrapper around collecting SHA-256 checksums over a directory tree. We encourage other researchers interested in long-term data curation to carry out active content monitoring (in addition to backups and RAID).
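
To make the idea concrete, here is a minimal sketch of that kind of active content monitoring: walk a directory tree, record a SHA-256 checksum for every file, and later re-hash the files to catch any byte that has changed. This is only an illustration of the approach, not babarchive’s actual code or manifest format; the function names and manifest layout below are made up.

```python
# Minimal sketch of directory-tree checksumming in the spirit of babarchive
# (not its actual code or manifest format; names and layout here are made up).
import hashlib
import json
import sys
from pathlib import Path

def sha256_of(path: Path, bufsize: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large captures need not fit in RAM."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(bufsize):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(root: Path) -> dict[str, str]:
    """Record a checksum for every regular file under root."""
    return {str(p.relative_to(root)): sha256_of(p)
            for p in sorted(root.rglob("*")) if p.is_file()}

def verify(root: Path, manifest: dict[str, str]) -> list[str]:
    """Return the files whose bytes no longer match their recorded checksum."""
    return [name for name, digest in manifest.items()
            if sha256_of(root / name) != digest]

if __name__ == "__main__":
    dataset = Path(sys.argv[1])
    manifest_file = dataset.parent / (dataset.name + ".sha256.json")
    if not manifest_file.exists():
        # Archive time: record checksums once for the whole dataset.
        manifest_file.write_text(json.dumps(build_manifest(dataset), indent=2))
    else:
        # Later re-checks: any mismatch means a byte changed underneath us.
        bad = verify(dataset, json.loads(manifest_file.read_text()))
        print("all clean" if not bad else f"changed files: {bad}")
```

In practice the manifest itself should be kept (and backed up) away from the primary copy, and verification rotated across datasets so that everything gets re-checked over time, as with our daily 10-minute checks.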

A huge thanks to our research sponsors: DHS (through the LANDER, LACREND, and LACANIC projects), NSF (through the MADCAT and MR-Net projects), and DARPA (through GAWSEED).

Categories
Students

congratulations to Xue Cai for her new PhD

I would like to congratulate Dr. Xue Cai for defending her PhD and filing her doctoral dissertation “Global Analysis and Modeling on Decentralized Internet” in Dec. 2013.

Xue Cai (left) and John Heidemann, after her PhD defense.

From the abstract:

Better understanding about Internet infrastructure is crucial to improve the reliability, performance, and security of web services. The need for this understanding then drives research in network measurements. Internet measurements explore a variety of data related to a specific topic and then develop approaches to transform data into useful understanding about the topic. This process is not straightforward since available data often only contains indirect information that may appear to have limited connection to the topic.
This body of work asserts that systematic approaches can overcome data limitations to improve understanding about important aspects of the Internet infrastructure. We demonstrate the validity of our thesis statement by providing three specific examples that develop novel approaches and provide novel understanding compared to prior work. In particular, we employ four systematic approaches—statistical, clustering, modeling, and what-if approach—to understand three important aspects of the Internet: the efficiency and management of IPv4 addresses, the ownership of Autonomous Systems (ASes), and the robustness of web services when facing critical facility disruption. These approaches have addressed a variety of challenges posed by indirect, incomplete, over-fit, noisy and unknown data; they in turn enable us to improve understanding about the Internet.
Each of our three studies explores a different area of the problem space and opens a much larger area of opportunity. The data limitations addressed by our approaches also occur in many other problems. We believe our approaches can inspire future work to solve these problems and in turn provide more useful understanding about the Internet.

Categories
Papers Publications

new conference paper “Low-Rate, Flow-Level Periodicity Detection” at Global Internet 2011

Visualization of low-rate periodicity, before and after installation of a keylogger. [Bartlett11a, figure 3]
The paper “Low-Rate, Flow-Level Periodicity Detection”, by Genevieve Bartlett, John Heidemann, and Christos Papadopoulos, is being presented at IEEE Global Internet 2011 in Shanghai, China this week. The full text is available at http://www.isi.edu/~johnh/PAPERS/Bartlett11a.pdf.

The abstract summarizes the work:

As desktops and servers become more complicated, they employ an increasing amount of automatic, non-user initiated communication. Such communication can be good (OS updates, RSS feed readers, and mail polling), bad (keyloggers, spyware, and botnet command-and-control), or ugly (adware or unauthorized peer-to-peer applications). Communication in these applications is often regular, but with very long periods, ranging from minutes to hours. This infrequent communication and the complexity of today’s systems makes these applications difficult for users to detect and diagnose. In this paper we present a new approach to identify low-rate periodic network traffic and changes in such regular communication. We employ signal-processing techniques, using discrete wavelets implemented as a fully decomposed, iterated filter bank. This approach not only detects low-rate periodicities, but also identifies approximate times when traffic changed. We implement a self-surveillance application that externally identifies changes to a user’s machine, such as interruption of periodic software updates, or an installation of a keylogger.
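
To give a flavor of the signal-processing side (this is not the authors’ implementation), the sketch below bins traffic into a time series, runs it through a fully decomposed wavelet-packet filter bank using the PyWavelets package, and flags frequency bands whose energy jumps after a change such as a newly installed beacon. The Haar wavelet, one-minute bins, synthetic beacon, and 4x threshold are all assumptions made for the illustration.

```python
# Rough illustration of flow-level periodicity detection: bin traffic into a
# time series, decompose it with a wavelet-packet filter bank, and look for
# frequency bands whose energy jumps after a change.
import numpy as np
import pywt

rng = np.random.default_rng(0)
n_bins, period = 1024, 32            # one-minute bins; beacon every 32 minutes

def band_energy(series, level=5):
    """Energy in each equal-width frequency band of a wavelet-packet tree."""
    wp = pywt.WaveletPacket(series - series.mean(), wavelet="haar",
                            mode="symmetric", maxlevel=level)
    return np.array([np.sum(np.square(node.data))
                     for node in wp.get_level(level, order="freq")])

background = rng.poisson(5.0, n_bins).astype(float)   # "before": normal traffic
infected = rng.poisson(5.0, n_bins).astype(float)     # "after": same background...
infected[::period] += 40.0                            # ...plus a periodic beacon

before, after = band_energy(background), band_energy(infected)
changed = np.flatnonzero(after > 4 * before)          # bands that gained energy
print("suspicious frequency bands:", changed)
# Band b at level 5 spans frequencies [b, b+1]/32 of Nyquist (1 cycle / 2 bins),
# so low band indices correspond to long periods of tens of minutes or more.
```

Running this before/after comparison over a sliding window would also roughly localize when the traffic changed, in the spirit of the self-surveillance application described above.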

The datasets used in this paper are available on request, and through PREDICT.

An expanded version of the paper is available as a technical report “Using low-rate flow periodicities in anomaly detection” by Bartlett, Heidemann and Papadopoulos. Technical Report ISI-TR-661, USC/Information Sciences Institute, Jul 2009. http://www.isi.edu/~johnh/PAPERS/Bartlett09a.pdf

Categories
Papers Publications

Paper at Global Internet 2010

Chris Wilcox presented a paper titled “Correlating Spam Activity with IP Address Characteristics” at Global Internet 2010. The paper uses LANDER survey data as well as spam data from eSoft.

Abstract: It is well known that spam bots mostly utilize compromised machines with certain address characteristics, such as dynamically allocated addresses, machines in specific geographic areas and IP ranges from AS’ with more tolerant spam policies. Such machines tend to be less diligently administered and may exhibit less stability, more volatility, and shorter uptimes. However, few studies have attempted to quantify how such spambot address characteristics compare with non-spamming hosts. Quantifying these characteristics may help provide important information for comprehensive spam mitigation. We use two large datasets, namely a commercial blacklist and an Internet-wide address visibility study, to quantify address characteristics of spam and non-spam networks. We find that spam networks exhibit significantly less availability and uptime, and higher volatility than non-spam networks. In addition, we conduct a collateral damage study of a common practice where an ISP blocks the entire /24 prefix if spammers are detected in that range. We find that such a policy blacklists a significant portion of legitimate mail servers belonging to the same prefix.
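
The collateral-damage question in the abstract reduces to a simple aggregation: map every spamming address to its covering /24, then count how many known-legitimate mail servers fall inside one of those prefixes. A minimal sketch of that calculation follows; the input lists and names are hypothetical, not the paper’s datasets.

```python
# Minimal sketch of the /24 collateral-damage question: if every /24 that
# contains a spammer were blocked, how many known-legitimate mail servers
# would be caught?  The address lists below are hypothetical examples.
from ipaddress import IPv4Network, ip_network

def to_slash24(addr: str) -> IPv4Network:
    """Map a single IPv4 address to its covering /24 prefix."""
    return ip_network(f"{addr}/24", strict=False)

spammers = ["192.0.2.10", "192.0.2.77", "198.51.100.5"]        # e.g., from a blacklist
legit_mail = ["192.0.2.25", "203.0.113.25", "198.51.100.200"]  # known-good mail servers

blocked = {to_slash24(a) for a in spammers}                    # /24s an ISP would block
collateral = [mx for mx in legit_mail if to_slash24(mx) in blocked]
print(f"{len(collateral)} of {len(legit_mail)} legitimate servers share a blocked /24:",
      collateral)
```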

Categories
Papers Publications

Paper at NPSec

Steve DiBenedetto presented a paper titled “Fingerprinting Custom Botnet Protocol Stacks” at NPSec 2010 in Kyoto, Japan.

Categories
Presentations

New Video About Address Utilization and Allocations on Map Browser

The ANT project released a video describing Internet address allocation and how we study address utilization with IPv4 censuses. Aniruddh Rao prepared this video, working with John Heidemann and Xue Cai.

a scene from the ANT video describing address allocation and census taking

We have also updated our web-based IPv4 address browser to show which organization each address block is allocated to. The map now visualizes the whois allocation data; we thank the five Regional Internet Registries for sharing this data with us and authorizing this visualization.

organizations in our Internet map

Finally, our web-based IPv4 address browser now has better time travel, with nearly 30 different censuses from Dec. 2005 to Nov. 2010, and we continue to update the map regularly.

Data collection for this work is through the LANDER project, and the map browser improvements are due to AMITE, both supported by DHS. Video preparation was supported by these projects and NSF through the MADCAT project.

Categories
Papers Publications

New journal paper “Parametric Methods for Anomaly Detection in Aggregate Traffic” to appear in TON

The paper “Parametric Methods for Anomaly Detection in Aggregate Traffic” was accepted for publication in IEEE/ACM Transactions on Networking (available at http://www.isi.edu/~johnh/PAPERS/Thatte10a.html).

From the abstract:

This paper develops parametric methods to detect network anomalies using only aggregate traffic statistics, in contrast to other works requiring flow separation, even when the anomaly is a small fraction of the total traffic. By adopting simple statistical models for anomalous and background traffic in the time-domain, one can estimate model parameters in realtime, thus obviating the need for a long training phase or manual parameter tuning. The proposed bivariate Parametric Detection Mechanism (bPDM) uses a sequential probability ratio test, allowing for control over the false positive rate while examining the trade-off between detection time and the strength of an anomaly. Additionally, it uses both traffic-rate and packet-size statistics, yielding a bivariate model that eliminates most false positives. The method is analyzed using the bitrate SNR metric, which is shown to be an effective metric for anomaly detection. The performance of the bPDM is evaluated in three ways: first, synthetically-generated traffic provides for a controlled comparison of detection time as a function of the anomalous level of traffic. Second, the approach is shown to be able to detect controlled artificial attacks over the USC campus network in varying real traffic mixes. Third, the proposed algorithm achieves rapid detection of real denial-of-service attacks as determined by the replay of previously captured network traces. The method developed in this paper is able to detect all attacks in these scenarios in a few seconds or less.
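
As a much-simplified illustration of the sequential probability ratio test at the heart of the bPDM (the real detector is bivariate, combining traffic rate and packet size, and estimates its parameters online), the sketch below runs a univariate Gaussian SPRT over a bitrate series; the means, variance, and error rates are assumptions chosen for the example.

```python
# Much-simplified illustration of the sequential probability ratio test (SPRT)
# underlying the bPDM: decide between "background only" and "background plus
# anomaly" as bitrate samples arrive.
import math
import numpy as np

def sprt(samples, mu0, mu1, sigma, alpha=1e-3, beta=1e-3):
    """Return ("anomaly"|"normal"|"undecided", index at which the test stopped)."""
    upper = math.log((1 - beta) / alpha)     # accept H1 (anomaly) above this
    lower = math.log(beta / (1 - alpha))     # accept H0 (normal) below this
    llr = 0.0
    for i, x in enumerate(samples):
        # log-likelihood ratio of one Gaussian sample under H1 vs. H0
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        if llr >= upper:
            return "anomaly", i
        if llr <= lower:
            return "normal", i
    return "undecided", len(samples) - 1

rng = np.random.default_rng(1)
background = rng.normal(100.0, 10.0, 200)            # Mbit/s, benign traffic
attack = rng.normal(115.0, 10.0, 200)                # benign plus a low-rate flood

print(sprt(background, mu0=100, mu1=115, sigma=10))  # -> ("normal", small index)
print(sprt(attack, mu0=100, mu1=115, sigma=10))      # -> ("anomaly", small index)
```

The thresholds are Wald’s classic bounds, log((1-β)/α) and log(β/(1-α)), which is how an SPRT trades detection time against the false-positive rate mentioned in the abstract.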

Citation: Gautam Thatte, Urbashi Mitra, and John Heidemann. Parametric Methods for Anomaly Detection in Aggregate Traffic. IEEE/ACM Transactions on Networking, accepted to appear, August 2010. (Likely publication in 2011.) <http://www.isi.edu/~johnh/PAPERS/Thatte10a.html>.

Categories
Papers Publications

new paper “Uses and Challenges for Network Datasets”

We just posted a pre-print of the paper “Uses and Challenges for Network Datasets”, to appear at IEEE CATCH in March.  The pre-print is at <http://www.isi.edu/~johnh/PAPERS/Heidemann09a.html>.

The abstract summarizes the paper:

Network datasets are necessary for many types of network research.  While there has been significant discussion about specific datasets, there has been less about the overall state of network data collection.  The goal of this paper is to explore the research questions facing the Internet today, the datasets needed to answer those questions, and the challenges to using those datasets.  We suggest several practices that have proven important in use of current data sets, and open challenges to improve use of network data.

More specifically, the paper tries to answer the question Jody Westby put to PREDICT PIs: “why take data, what is it good for?”  While a simple question, it’s not easy to answer (at least, my attempt to dash off a quick answer in e-mail failed).  The paper is an attempt at a more thoughtful answer.

The paper tries to summarize and point to a lot of ongoing work, but I know that our coverage was insufficient.  We welcome feedback about what we’re missing.

John Heidemann and Christos Papadopoulos. Uses and Challenges for Network Datasets. In Proceedings of the IEEE Cybersecurity Applications and Technologies Conference for Homeland Security (CATCH), pp. 73-82. Washington, DC, USA, IEEE. March, 2009. http://www.isi.edu/~johnh/PAPERS/Heidemann09a.html