Categories: Announcements, Projects

new project LACANIC

We are happy to announce a new project, LACANIC, the Los Angeles/Colorado Application and Network Information Community.

The LACANIC project’s goal is to develop datasets to improve Internet security and reliability. We distribute these datasets through the DHS IMPACT program.

As part of this work we:

  • carry out regular data collection to build long-term, longitudinal datasets
  • curate datasets for special events
  • build websites and portals to help make data accessible to casual users
  • develop new measurement approaches

We provide several types of datasets:

  • anonymized packet headers and network flow data, often to document events like distributed denial-of-service (DDoS) attacks and regular traffic
  • Internet censuses and surveys for IPv4 to document address usage
  • Internet hitlists and histories, derived from IPv4 censuses, to support other topology studies
  • application data, like DNS and Internet-of-Things mapping, to document regular traffic and DDoS events
  • additional datasets now under development

LACANIC allows us to continue some of the data collection we were doing as part of the LACREND project, as well as develop new methods and ways of sharing the data.

LACANIC is a joint effort of the ANT Lab involving USC/ISI (PI: John Heidemann) and Colorado State University (PI: Christos Papadopoulos).

We thank DHS’s Cyber Security Division for their continued support!

 

Categories: Software releases

mtracecap: New utility for multi-point capture released

mtracecap v0.1 (beta) has been released (available at https://ant.isi.edu/software/mtracecap/index.html).

This tool is designed to capture packets from multiple sources and write its output to a single file. Building it requires a local install of the libtrace library (version 4.0 or older), and it supports all sources supported by that library, such as pcap-based interfaces, Linux-specific ring interfaces, pcap and ERF outputs, and many more; run mtracecap with the -H option to see them all listed. DAG device capture is optional, depending on local DAG libraries being present.

An important feature of this tool is its ability to roll output into multiple files, based either on maximum file size (e.g., the “-S 100” option writes output in 100 MB chunks) or on system time (e.g., the “-G 180” option rotates output every 180 seconds).
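
The rotation logic can be sketched roughly as follows (a minimal Python illustration of the same size- or time-based idea, not mtracecap’s actual code; the packet source is a placeholder):

    import time

    # Sketch of size- and time-based output rotation, similar in spirit to
    # mtracecap's -S (size) and -G (interval) options.  read_packets() is a
    # hypothetical generator yielding raw packet records as bytes.
    MAX_BYTES = 100 * 1024 * 1024   # rotate after ~100 MB, like "-S 100"
    MAX_SECONDS = 180               # rotate every 180 seconds, like "-G 180"

    def rotate_capture(read_packets, basename="capture"):
        seq, out, written, opened_at = 0, None, 0, 0.0
        for pkt in read_packets():
            now = time.time()
            if out is None or written >= MAX_BYTES or now - opened_at >= MAX_SECONDS:
                if out:
                    out.close()
                out = open(f"{basename}.{seq:04d}", "wb")
                seq, written, opened_at = seq + 1, 0, now
            out.write(pkt)
            written += len(pkt)
        if out:
            out.close()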

Finally, the tool can pipe its output through an external command before writing it to a file (see the --pipeout option). This can be useful if you want to compute statistics on the fly or compress the output with an external compressor; using this option eliminates extra disk read-write operations if all you want to do is compress the output.
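
Conceptually, this is like feeding the capture stream through a pipe to an external process before it reaches disk. A hedged Python sketch of that idea, using gzip as the external compressor (an illustration of the concept, not mtracecap’s implementation):

    import subprocess

    # Pipe capture records through an external compressor before they hit
    # disk, in the spirit of the --pipeout option described above.
    def write_through_pipe(records, command=("gzip", "-c"), outfile="capture.gz"):
        with open(outfile, "wb") as f:
            proc = subprocess.Popen(list(command), stdin=subprocess.PIPE, stdout=f)
            for rec in records:          # each record is raw bytes (placeholder)
                proc.stdin.write(rec)
            proc.stdin.close()
            proc.wait()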

Categories: Papers, Publications

new workshop paper “Privacy Principles for Sharing Cyber Security Data” in IWPE 15

The paper “Privacy Principles for Sharing Cyber Security Data” (available at https://www.isi.edu/~calvin/papers/Fisk15a.pdf) will appear at the International Workshop on Privacy Engineering (co-located with IEEE Symposium on Security and Privacy) on May 21, 2015 in San Jose, California.

From the abstract:

Sharing cyber security data across organizational boundaries brings both privacy risks in the exposure of personal information and data, and organizational risk in disclosing internal information. These risks occur as information leaks in network traffic or logs, and also in queries made across organizations. They are also complicated by the trade-offs in privacy preservation and utility present in anonymization to manage disclosure. In this paper, we define three principles that guide sharing security information across organizations: Least Disclosure, Qualitative Evaluation, and Forward Progress. We then discuss engineering approaches that apply these principles to a distributed security system. Application of these principles can reduce the risk of data exposure and help manage trust requirements for data sharing, helping to meet our goal of balancing privacy, organizational risk, and the ability to better respond to security with shared information.

The work in the paper is by Gina Fisk (LANL), Calvin Ardi (USC/ISI), Neale Pickett (LANL), John Heidemann (USC/ISI), Mike Fisk (LANL), and Christos Papadopoulos (Colorado State). This work is supported by DHS S&T, Cyber Security division.

Categories: Papers, Publications

new conference paper “BotTalker: Generating Encrypted, Customizable C&C Traces” in HST 2015

The paper “BotTalker: Generating Encrypted, Customizable C&C Traces” will appear at the 14th annual IEEE Symposium on Technologies for Homeland Security (HST ’15) in April 2015 (available at http://www.cs.colostate.edu/~zhang/papers/BotTalker.pdf).

From the abstract:

Encrypted botnets have seen increasing use in recent years. To enable research in detecting encrypted botnets researchers need samples of encrypted botnet traces with ground truth, which are very hard to get. Traces that are available are not customizable, which prevents testing under various controlled scenarios. To address this problem we introduce BotTalker, a tool that can be used to generate customized encrypted botnet communication traffic. BotTalker emulates the actions a bot would take to encrypt communication. It includes a highly configurable encrypted-traffic converter along with real, non-encrypted bot traces and background traffic. The converter is able to convert non-encrypted botnet traces into encrypted ones by providing customization along three dimensions: (a) selection of real encryption algorithm, (b) flow- or packet-level conversion and SSL emulation, and (c) IP address substitution. To the best of our knowledge, BotTalker is the first work that provides users customized encrypted botnet traffic. In the paper we also apply BotTalker to evaluate the damage resulting from encrypted botnet traffic on a widely used botnet detection system, BotHunter, and two IDSs, Snort and Suricata. The results show that encrypted botnet traffic foils bot detection in these systems.
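
As a rough illustration of the packet-level conversion idea (a sketch under assumptions, not BotTalker’s actual implementation, which also handles SSL emulation, flow-level conversion, and IP address substitution), one could substitute each payload in a plaintext bot trace with its ciphertext under a real encryption algorithm, assuming the Python cryptography package is available:

    from cryptography.fernet import Fernet

    # Toy sketch: replace plaintext C&C payloads with real ciphertext, loosely
    # in the spirit of BotTalker's packet-level conversion.  Payloads here are
    # hypothetical byte strings; a real converter would rewrite packets in a
    # trace file rather than a Python list.
    key = Fernet.generate_key()
    cipher = Fernet(key)

    def encrypt_payloads(payloads):
        return [cipher.encrypt(p) for p in payloads]

    print(encrypt_payloads([b"JOIN #channel", b"SCAN 192.0.2.0/24"]))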

This work is advised by Christos Papadopoulos and supported by LACREND.

Categories: Papers, Publications

new conference paper “Replay of Malicious Traffic in Network Testbeds” in IEEE Conf. on Technologies for Homeland Security (HST)

The paper “Replay of Malicious Traffic in Network Testbeds” (by Alefiya Hussain, Yuri Pradkin, and John Heidemann) will appear in the 13th IEEE Conference on Technologies for Homeland Security (HST) in Waltham, Mass. in Nov. 2013. The paper is available at http://www.isi.edu/~johnh/PAPERS/Hussain13a.

From the paper’s abstract:

In this paper we present tools and methods to integrate attack measurements from the Internet with controlled experimentation on a network testbed. We show that this approach provides greater fidelity than synthetic models. We compare the statistical properties of real-world attacks with synthetically generated constant bit rate attacks on the testbed. Our results indicate that trace replay provides fine time-scale details that may be absent in constant bit rate attacks. Additionally, we demonstrate the effectiveness of our approach to study new and emerging attacks. We replay an Internet attack captured by the LANDER system on the DETERLab testbed within two hours.

Data from the paper is available as DoS_DNS_amplification-20130617 from the authors or from http://www.predict.org, and the tools are available at DETERLab.

Categories: Papers, Publications

new conference paper “On the Characteristics and Reasons of Long-lived Internet Flows” at IMC

The paper “On the Characteristics and Reasons of Long-lived Internet Flows” was accepted by IMC’10 in Melbourne, Australia (available at http://www.isi.edu/~johnh/PAPERS/Quan10a.html).

From the abstract:

Prior studies of Internet traffic have considered traffic at different resolutions and time scales: packets and flows for hours or days, aggregate packet statistics for days or weeks, and hourly trends for months. However, little is known about the long-term behavior of individual flows. In this paper, we study individual flows (as defined by the 5-tuple of protocol, source and destination IP address and port) over days and weeks. While the vast majority of flows are short, and most bytes are in short flows, we find that about 20% of the overall bytes are carried in flows that last longer than 10 minutes, and flows lasting 100 minutes or longer make up 2% of traffic. We show that long-lived flows are qualitatively different from short flows: they are generally slower, less bursty, and are due to different applications and protocols. We investigate the causes of short- and long-lived flows, and show that the traffic mix varies significantly depending on duration time scale, with computer-to-computer traffic more and more dominating in larger time scales.
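
For concreteness, here is a minimal sketch of keying traffic by 5-tuple and measuring flow durations so that long-lived flows can be separated from short ones (the packet field layout is hypothetical; this is not the paper’s analysis code):

    # Group packets by 5-tuple and report per-flow durations.  Each packet is
    # assumed to be a (proto, src, dst, sport, dport, timestamp) tuple.
    def flow_durations(packets):
        first, last = {}, {}
        for proto, src, dst, sport, dport, ts in packets:
            key = (proto, src, dst, sport, dport)
            first.setdefault(key, ts)
            last[key] = ts
        return {k: last[k] - first[k] for k in first}

    def long_lived(durations, threshold=600):   # 600 s = 10 minutes
        return {k: d for k, d in durations.items() if d >= threshold}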

Citation: Lin Quan and John Heidemann. On the Characteristics and Reasons of Long-lived Internet Flows. In Proceedings of the ACM Internet Measurement Conference. Melbourne, Australia, ACM. November, 2010. <http://www.isi.edu/~johnh/PAPERS/Quan10a.html>.


Categories: Papers, Publications

New conference paper “Improved Internet Traffic Analysis via Optimized Sampling”

The paper “Improved Internet Traffic Analysis via Optimized Sampling” (available in PDF format) was accepted to ICASSP 2010. The focus of this paper is on the best down-sampling methods to use when measuring Internet traffic in order to preserve signal information for traffic analysis techniques such as anomaly detection.

From the abstract:

Applications to evaluate Internet quality-of-service and increase network security are essential to maintaining reliability and high performance in computer networks. These applications typically use very accurate, but high cost, hardware measurement systems. Alternate, less expensive software based systems are often impractical for use with analysis applications because they reduce the number and accuracy of measurements using a technique called interrupt coalescence, which can be viewed as a form of sampling. The goal of this paper is to optimize the way interrupt coalescence groups packets into measurements so as to retain as much of the packet timing information as possible. Our optimized solution produces estimates of timing distributions much closer to those obtained using hardware based systems. Further we show that for a real Internet analysis application, periodic signal detection, using measurements generated with our method improved detection times by at least 36%.
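
The distortion being modeled here is that coalescence delivers a batch of packets with a single timestamp. A small sketch of that view of interrupt coalescence as sampling (an illustration of the problem, not the paper’s optimized grouping; the window size is an arbitrary example):

    # Model interrupt coalescence: packets arriving within one coalescence
    # window are delivered together and share a single delivery timestamp,
    # losing fine-grained inter-arrival detail.
    def coalesce(arrival_times, window=125e-6):   # e.g., a 125-microsecond window
        batches, current = [], []
        for t in sorted(arrival_times):
            if current and t - current[0] > window:
                batches.append((current[-1], len(current)))  # (delivery time, packet count)
                current = []
            current.append(t)
        if current:
            batches.append((current[-1], len(current)))
        return batches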

Citation: Sean McPherson and Antonio Ortega. Improved Internet Traffic Analysis via Optimized Sampling. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, to appear. Dallas, TX, USA, IEEE. March, 2010.

Categories: Publications, Technical Report

New tech report “Analysis of Internet Measurement Systems for Optimized Anomaly Detection System Design”

A new tech report has been posted to the arXiv database at http://arxiv.org/abs/0907.5233. This paper shows the effect of a software-based measurement system on the timing of the measurements obtained. Additionally, this paper develops a periodic signal detection method specific to software-based measurement.

Although there exist very accurate hardware systems for measuring traffic on the internet, their widespread use for analysis tasks is limited by their high cost. On the other hand, less expensive, software-based systems exist that are widely available and can be used to perform a number of simple analysis tasks. The caveat with using such software systems is that application of standard analysis methods cannot proceed blindly because inherent distortions exist in the measurements obtained from software systems. The goal of this paper is to analyze common Internet measurement systems to discover the effect of these distortions on common analysis tasks. Then by selecting one specific task, periodic signal detection, a more in-depth analysis is conducted which derives a signal representation to capture the salient features of the measurement and develops a periodic detection mechanism designed for the measurement system which outperforms an existing detection method not optimized for the measurement system. Finally, through experiments the importance of understanding the relationship between the input traffic, measurement system configuration and detection method performance is emphasized.
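
As a rough illustration of the periodic-detection task (a generic sketch, not the measurement-aware detector developed in the report), one can bin packet timestamps into a count series and look for the dominant peak in its periodogram:

    import numpy as np

    # Bin packet timestamps into counts and estimate the dominant period from
    # the periodogram.  The bin width is an arbitrary choice for illustration.
    def dominant_period(timestamps, bin_width=0.01):
        t = np.asarray(timestamps, dtype=float)
        edges = np.arange(t.min(), t.max() + bin_width, bin_width)
        counts, _ = np.histogram(t, bins=edges)
        counts = counts - counts.mean()
        spectrum = np.abs(np.fft.rfft(counts)) ** 2
        freqs = np.fft.rfftfreq(len(counts), d=bin_width)
        peak = spectrum[1:].argmax() + 1       # skip the DC term
        return 1.0 / freqs[peak]               # estimated period in seconds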

Citation: Sean McPherson and Antonio Ortega. Analysis of Internet Measurement Systems for Optimized Anomaly Detection System Design. Technical Report arXiv:0907.5233v1, University of Southern California, Department of Electrical Engineering, July, 2009. http://arxiv.org/abs/0907.5233.