Categories
Publications Technical Report

new technical report “Peek Inside the Closed World: Evaluating Autoencoder-Based Detection of DDoS to Cloud”

We have released a new technical report, “Peek Inside the Closed World: Evaluating Autoencoder-Based Detection of DDoS to Cloud”, as arXiv technical report 1912.05590. It is available at https://www.isi.edu/~hangguo/papers/Guo19a.pdf

We study 4 cloud IPs (SR1VP1 through SR1VP3 and SR2VP1) that are under attack. SR1VP3 sees a large number of mostly short DDoS events (71% of its 49 events last 1 second or less). SR1VP1 and SR1VP2 see smaller numbers of longer DDoS events (the median durations of their 20 and 27 events are 121 and 140 seconds, respectively). SR2VP1 sees DDoS events with a broad range of durations, from 1 second to more than 14 hours.

From the abstract of our technical report:

Machine-learning-based anomaly detection (ML-based AD) has been successful at detecting DDoS events in the lab. However, published evaluations of ML-based AD have used only limited data and have not provided insight into why it works. To address the lack of evaluation against real-world data, we apply the autoencoder, an existing ML-based AD model, to 57 DDoS attack events captured at 5 cloud IPs from a major cloud provider. To improve our understanding of why ML-based AD does or does not work, we interpret this data with feature attribution and counterfactual explanation. We show that our version of autoencoders works well overall: our models capture nearly all malicious flows to 2 of the 4 cloud IPs under attack (at least 99.99%) but generate a few false negatives (5% and 9%) for the remaining 2 IPs. We show that our models maintain near-zero false positives on benign flows to all 5 IPs. Our interpretation of the results shows that our models identify almost all malicious flows with non-whitelisted (non-WL) destination ports (99.92%) by learning the full list of benign destination ports from the training data (the normality). Interpretation shows that although our models learn incomplete normality for protocols and source ports, they still identify most malicious flows with non-WL protocols and blacklisted (BL) source ports (100.0% and 97.5%) but risk false positives. Interpretation also shows that our models detect only a few malicious flows with BL packet sizes (8.5%), incorrectly inferring these BL sizes as normal from the incomplete normality they learned. We find our models still detect a quarter of flows (24.7%) with abnormal payload contents, even though they never see payload, by combining anomalies from multiple flow features. Lastly, we summarize the implications of what we learn for applying autoencoder-based AD in production.
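
For readers who want a concrete picture of the basic approach, here is a minimal sketch of autoencoder-based anomaly detection on flow features: train only on benign flows and flag flows whose reconstruction error exceeds a threshold picked from that benign training data. This is an illustrative toy, not the models from the report; the feature set, network size, and threshold rule are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_autoencoder(X, hidden=4, epochs=2000, lr=0.01):
    """Train a one-hidden-layer autoencoder with plain gradient descent."""
    n, d = X.shape
    W1 = rng.normal(0, 0.1, (d, hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.1, (hidden, d))
    b2 = np.zeros(d)
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)      # encoder
        Y = H @ W2 + b2               # decoder (linear output)
        err = Y - X                   # reconstruction error
        # backpropagation for a mean-squared-error loss
        gY = 2 * err / n
        gW2 = H.T @ gY
        gb2 = gY.sum(axis=0)
        gH = gY @ W2.T * (1 - H ** 2)
        gW1 = X.T @ gH
        gb1 = gH.sum(axis=0)
        W1 -= lr * gW1
        b1 -= lr * gb1
        W2 -= lr * gW2
        b2 -= lr * gb2
    return W1, b1, W2, b2

def reconstruction_error(X, params):
    W1, b1, W2, b2 = params
    Y = np.tanh(X @ W1 + b1) @ W2 + b2
    return ((Y - X) ** 2).mean(axis=1)

# Toy per-flow features, already scaled to [0, 1] (e.g. destination port,
# protocol, source port, packet size) -- an illustrative assumption.
benign = rng.uniform(0.4, 0.6, size=(500, 4))
params = train_autoencoder(benign)

# Threshold: e.g. the 99.9th percentile of benign reconstruction error.
thresh = np.percentile(reconstruction_error(benign, params), 99.9)

suspect = rng.uniform(0.0, 1.0, size=(10, 4))   # flows to score
flagged = reconstruction_error(suspect, params) > thresh
print(flagged)
```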

This technical report is joint work of Hang Guo and John Heidemann from USC/ISI and Xun Fan, Anh Cao, and Geoff Outhred from Microsoft.

Categories
Papers Publications

new conference paper “Detecting Malicious Activity with DNS Backscatter”

The paper “Detecting Malicious Activity with DNS Backscatter” will appear at the ACM Internet Measurement Conference in October 2015 in Tokyo, Japan. A copy is available at http://www.isi.edu/~johnh/PAPERS/Fukuda15a.pdf.

How network activity generates DNS backscatter that is visible at authority servers. (Figure 1 from [Fukuda15a].)
From the abstract:

Network-wide activity is when one computer (the originator) touches many others (the targets). Motives for activity may be benign (mailing lists, CDNs, and research scanning), malicious (spammers and scanners for security vulnerabilities), or perhaps indeterminate (ad trackers). Knowledge of malicious activity may help anticipate attacks, and understanding benign activity may set a baseline or characterize growth. This paper identifies DNS backscatter as a new source of information about network-wide activity. Backscatter is the reverse DNS queries caused when targets or middleboxes automatically look up the domain name of the originator. Queries are visible to the authoritative DNS servers that handle reverse DNS. While the fraction of backscatter they see depends on the server’s location in the DNS hierarchy, we show that activity that touches many targets appears even in sampled observations. We use information about the queriers to classify originator activity using machine learning. Our algorithm has reasonable precision (70-80%) as shown by data from three different organizations operating DNS servers at the root or country level. Using this technique we examine nine months of activity from one authority to identify trends in scanning, identifying bursts corresponding to Heartbleed and broad and continuous scanning of ssh.
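
As an illustration of the core idea only (not the paper’s classifier), the sketch below takes a log of reverse-DNS (PTR) queries as seen at an authoritative server, recovers the originator IP from each in-addr.arpa name, and counts how many distinct queriers asked about each originator. Originators looked up by many different queriers are candidates for network-wide activity. The log format and the “many queriers” threshold are assumptions for illustration.

```python
from collections import defaultdict

def originator_from_ptr(qname):
    """Turn '4.3.2.1.in-addr.arpa.' into '1.2.3.4' (IPv4 only)."""
    labels = qname.rstrip(".").split(".")
    if labels[-2:] != ["in-addr", "arpa"] or len(labels) != 6:
        return None
    return ".".join(reversed(labels[:4]))

# Each record: (querier IP, queried PTR name) seen at the authority.
query_log = [
    ("198.51.100.7", "4.3.2.1.in-addr.arpa."),
    ("203.0.113.9",  "4.3.2.1.in-addr.arpa."),
    ("192.0.2.33",   "4.3.2.1.in-addr.arpa."),
    ("198.51.100.7", "25.0.113.203.in-addr.arpa."),
]

queriers = defaultdict(set)
for querier, qname in query_log:
    orig = originator_from_ptr(qname)
    if orig is not None:
        queriers[orig].add(querier)

THRESHOLD = 3   # illustrative notion of "many" distinct queriers
for orig, qs in queriers.items():
    if len(qs) >= THRESHOLD:
        print(f"{orig}: looked up by {len(qs)} distinct queriers")
```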

The work in this paper is by Kensuke Fukuda (NII/Sokendai) and John Heidemann (USC/ISI) and was begun when Fukuda-san was a visiting scholar at USC/ISI. Kensuke Fukuda’s work in this paper is partially funded by the Young Researcher Overseas Visit Program of Sokendai, JSPS Kakenhi, the Strategic International Collaborative R&D Promotion Project of the Ministry of Internal Affairs and Communication in Japan, and the European Union Seventh Framework Programme. John Heidemann’s work is partially supported by US DHS S&T, Cyber Security Division.

Some of the datasets in this paper are available to researchers, either from the authors or through DNS-OARC.  We list DNS backscatter datasets and methods to obtain them at https://ant.isi.edu/datasets/dns_backscatter/index.html.


Categories
Papers Publications

new conference paper “Low-Rate, Flow-Level Periodicity Detection” at Global Internet 2011

Visualization of low-rate periodicity, before and after installation of a keylogger. [Bartlett11a, figure 3]
The paper “Low-Rate, Flow-Level Periodicity Detection”, by Genevieve Bartlett, John Heidemann, and Christos Papadopoulos is being presented at IEEE Global Internet 2011 in Shanghai, China this week. The full text is available at http://www.isi.edu/~johnh/PAPERS/Bartlett11a.pdf.

The abstract summarizes the work:

As desktops and servers become more complicated, they employ an increasing amount of automatic, non-user-initiated communication. Such communication can be good (OS updates, RSS feed readers, and mail polling), bad (keyloggers, spyware, and botnet command-and-control), or ugly (adware or unauthorized peer-to-peer applications). Communication in these applications is often regular, but with very long periods, ranging from minutes to hours. This infrequent communication and the complexity of today’s systems make these applications difficult for users to detect and diagnose. In this paper we present a new approach to identify low-rate periodic network traffic and changes in such regular communication. We employ signal-processing techniques, using discrete wavelets implemented as a fully decomposed, iterated filter bank. This approach not only detects low-rate periodicities, but also identifies approximate times when traffic changed. We implement a self-surveillance application that externally identifies changes to a user’s machine, such as interruption of periodic software updates, or an installation of a keylogger.
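
To make the filter-bank idea concrete, here is a minimal sketch (not the paper’s implementation): it bins a flow’s packet counts into a time series, runs a fully decomposed, iterated Haar filter bank over it, and reports the energy in each leaf band. A band with outsized energy suggests periodic traffic in that frequency range. The binning, decomposition depth, and toy traffic are assumptions.

```python
import numpy as np

def haar_split(x):
    """One filter-bank stage: (low-pass, high-pass) at half the rate."""
    x = x[: len(x) // 2 * 2]
    pairs = x.reshape(-1, 2)
    low = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)
    high = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)
    return low, high

def full_decomposition(x, depth):
    """Fully decomposed (wavelet-packet style) iterated filter bank."""
    bands = [x]
    for _ in range(depth):
        bands = [half for b in bands for half in haar_split(b)]
    return bands

# Toy series: one packet roughly every 64 seconds plus background noise,
# binned at 1-second resolution.
rng = np.random.default_rng(1)
t = np.arange(4096)
series = (t % 64 == 0).astype(float) + rng.poisson(0.05, t.size)

bands = full_decomposition(series - series.mean(), depth=6)
energy = np.array([np.sum(b ** 2) for b in bands])
print("band with most energy:", int(energy.argmax()), "of", len(bands))
```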

The datasets used in this paper are available on request, and through PREDICT.

An expanded version of the paper is available as a technical report, “Using low-rate flow periodicities in anomaly detection”, by Bartlett, Heidemann, and Papadopoulos: Technical Report ISI-TR-661, USC/Information Sciences Institute, July, 2009. http://www.isi.edu/~johnh/PAPERS/Bartlett09a.pdf

Categories
Papers Publications

New journal paper “Parametric Methods for Anomaly Detection in Aggregate Traffic” to appear in TON

The paper “Parametric Methods for Anomaly Detection in Aggregate Traffic” was accepted for publication in IEEE/ACM Transactions on Networking (available at http://www.isi.edu/~johnh/PAPERS/Thatte10a.html).

From the abstract:

This paper develops parametric methods to detect network anomalies using only aggregate traffic statistics, in contrast to other works requiring flow separation, even when the anomaly is a small fraction of the total traffic. By adopting simple statistical models for anomalous and background traffic in the time domain, one can estimate model parameters in real time, thus obviating the need for a long training phase or manual parameter tuning. The proposed bivariate Parametric Detection Mechanism (bPDM) uses a sequential probability ratio test, allowing for control over the false positive rate while examining the trade-off between detection time and the strength of an anomaly. Additionally, it uses both traffic-rate and packet-size statistics, yielding a bivariate model that eliminates most false positives. The method is analyzed using the bitrate SNR metric, which is shown to be an effective metric for anomaly detection. The performance of the bPDM is evaluated in three ways. First, synthetically generated traffic provides for a controlled comparison of detection time as a function of the anomalous level of traffic. Second, the approach is shown to be able to detect controlled artificial attacks over the USC campus network in varying real traffic mixes. Third, the proposed algorithm achieves rapid detection of real denial-of-service attacks as determined by the replay of previously captured network traces. The method developed in this paper is able to detect all attacks in these scenarios in a few seconds or less.
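
For intuition about the sequential probability ratio test at the heart of the method, here is a heavily simplified sketch: it tests a Gaussian “background” rate against a Gaussian “background plus anomaly” rate using Wald’s thresholds. The real bPDM estimates its parameters online and also uses packet-size statistics; the distributions, parameters, and toy traffic below are assumptions for illustration.

```python
import math

def gaussian_loglik(x, mean, std):
    return -0.5 * math.log(2 * math.pi * std * std) - (x - mean) ** 2 / (2 * std * std)

def sprt(samples, mean0, mean1, std, alpha=0.001, beta=0.01):
    """Return ('attack'|'normal'|'undecided', samples consumed)."""
    upper = math.log((1 - beta) / alpha)   # cross it: accept H1 (attack)
    lower = math.log(beta / (1 - alpha))   # cross it: accept H0 (normal)
    llr = 0.0
    for i, x in enumerate(samples, 1):
        llr += gaussian_loglik(x, mean1, std) - gaussian_loglik(x, mean0, std)
        if llr >= upper:
            return "attack", i
        if llr <= lower:
            return "normal", i
    return "undecided", len(samples)

# Toy bitrates (Mb/s) sampled once per second: background around 100,
# then an anomaly that adds roughly 10 on top.
background = [100 + 3 * math.sin(i) for i in range(20)]
attack = [110 + 3 * math.sin(i) for i in range(20)]
print(sprt(background, mean0=100, mean1=110, std=5))
print(sprt(attack, mean0=100, mean1=110, std=5))
```

Note the sequential flavor: the test stops as soon as the accumulated evidence crosses either threshold, which is what allows trading detection time against anomaly strength.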

Citation: Gautam Thatte, Urbashi Mitra, and John Heidemann. Parametric Methods for Anomaly Detection in Aggregate Traffic. IEEE/ACM Transactions on Networking, to appear, August, 2010 (likely publication in 2011). <http://www.isi.edu/~johnh/PAPERS/Thatte10a.html>.
Categories
Papers Publications

New conference paper “Improved Internet Traffic Analysis via Optimized Sampling”

The paper “Improved Internet Traffic Analysis via Optimized Sampling” (available in PDF format) was accepted to ICASSP 2010. The focus of this paper is on the best down-sampling methods to use when measuring Internet traffic in order to preserve signal information for traffic-analysis techniques such as anomaly detection.

From the abstract:

Applications to evaluate Internet quality-of-service and increase network security are essential to maintaining reliability and high performance in computer networks. These applications typically use very accurate, but high-cost, hardware measurement systems. Alternate, less expensive software-based systems are often impractical for use with analysis applications because they reduce the number and accuracy of measurements using a technique called interrupt coalescence, which can be viewed as a form of sampling. The goal of this paper is to optimize the way interrupt coalescence groups packets into measurements so as to retain as much of the packet timing information as possible. Our optimized solution produces estimates of timing distributions much closer to those obtained using hardware-based systems. Further, we show that for a real Internet analysis application, periodic signal detection, using measurements generated with our method improved detection times by at least 36%.
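
The sketch below illustrates the distortion being addressed rather than the paper’s optimized solution: it simulates interrupt coalescence by delivering packets in fixed-size batches with a single timestamp per batch, then compares inter-arrival statistics from the true per-packet times, from the naive batched times, and from a simple, purely illustrative even-spreading reconstruction. The batch size and traffic model are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# True per-packet arrival times: Poisson traffic at ~1000 packets/s.
arrivals = np.cumsum(rng.exponential(0.001, 5000))

# Interrupt coalescence: packets delivered in fixed-size batches, with the
# software system recording only one timestamp (the last arrival) per batch.
BATCH = 8
batch_times = arrivals[BATCH - 1 :: BATCH]

# Naive reconstruction: every packet in a batch gets the batch timestamp.
naive = np.repeat(batch_times, BATCH)

# Illustrative smoothing (not the paper's method): spread each batch's
# packets evenly over the interval since the previous batch.
prev = np.concatenate(([arrivals[0]], batch_times[:-1]))
smooth = np.concatenate(
    [np.linspace(p, t, BATCH + 1)[1:] for p, t in zip(prev, batch_times)]
)

def iat_stats(ts):
    """Standard deviation and zero fraction of inter-arrival times."""
    d = np.diff(ts)
    return float(d.std()), float((d == 0).mean())

for name, ts in [("true", arrivals), ("naive", naive), ("smoothed", smooth)]:
    std, zeros = iat_stats(ts)
    print(f"{name:8s} inter-arrival std={std * 1e3:.3f} ms  zero gaps={zeros:.0%}")
# The naive series collapses most gaps to zero and inflates variability;
# even spreading removes the zeros but understates the true variability.
```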

Citation: Sean McPherson and Antonio Ortega. Improved Internet Traffic Analysis via Optimized Sampling. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, to appear. Dallas, TX, USA, IEEE. March, 2010.

Categories
Publications Technical Report

New tech report “Analysis of Internet Measurement Systems for Optimized Anomaly Detection System Design”

A new tech report has been posted to arXiv at http://arxiv.org/abs/0907.5233. This paper shows the effect of a software-based measurement system on the timing of the measurements obtained. Additionally, this paper develops a periodic signal detection method specific to software-based measurement.

From the abstract:

Although there exist very accurate hardware systems for measuring traffic on the Internet, their widespread use for analysis tasks is limited by their high cost. On the other hand, less expensive, software-based systems exist that are widely available and can be used to perform a number of simple analysis tasks. The caveat with using such software systems is that application of standard analysis methods cannot proceed blindly, because inherent distortions exist in the measurements obtained from software systems. The goal of this paper is to analyze common Internet measurement systems to discover the effect of these distortions on common analysis tasks. Then, selecting one specific task, periodic signal detection, a more in-depth analysis is conducted that derives a signal representation to capture the salient features of the measurements and develops a periodic detection mechanism designed for the measurement system; this mechanism outperforms an existing detection method not optimized for the measurement system. Finally, experiments emphasize the importance of understanding the relationship between the input traffic, the measurement system configuration, and detection method performance.
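
As a concrete, simplified example of periodic signal detection on such measurements (not the paper’s detector), the sketch below bins per-record packet counts onto a regular time grid and looks for a peak in the periodogram of the binned series. The bin width, traffic model, and peak test are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

BIN = 0.1         # seconds per bin of the analysis grid
DURATION = 60.0   # seconds of synthetic measurements

# Synthetic measurement records: one (timestamp, packet count) every 50 ms,
# background Poisson traffic plus a 0.5 Hz periodic component.
times = np.arange(0.0, DURATION, 0.05)
counts = rng.poisson(5, times.size) + 10.0 * (1.0 + np.sin(2 * np.pi * 0.5 * times))

# Bin the records onto the regular analysis grid (real coalesced records
# arrive on an irregular grid, which is why the binning step matters).
series = np.zeros(round(DURATION / BIN))
np.add.at(series, np.floor(times / BIN).astype(int), counts)

# Periodogram of the mean-removed series; report the dominant frequency.
spectrum = np.abs(np.fft.rfft(series - series.mean())) ** 2
freqs = np.fft.rfftfreq(series.size, d=BIN)
peak = spectrum[1:].argmax() + 1               # skip the DC bin
print(f"dominant period: {1.0 / freqs[peak]:.2f} s")   # expect about 2 s
print(f"peak-to-median power ratio: {spectrum[peak] / np.median(spectrum):.1f}")
```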

Citation: Sean McPherson and Antonio Ortega. Analysis of Internet Measurement Systems for Optimized Anomaly Detection System Design. Technical Report arXiv:0907.5233v1, University of Southern California, Department of Electrical Engineering, July, 2009. http://arxiv.org/abs/0907.5233.

Categories
Publications Technical Report

new tech report “Parametric Methods for Anomaly Detection in Aggregate Traffic”

We just posted a tech report “Parametric Methods for Anomaly Detection in Aggregate Traffic” at <ftp://ftp.isi.edu/isi-pubs/tr-663.pdf>. This paper represents quite a bit of work looking at how to apply parametric detection as part of the NSF-sponsored MADCAT project.

From the abstract:

This paper develops parametric methods to detect network anomalies using only aggregate traffic statistics, in contrast to other works requiring flow separation, even when the anomaly is a small fraction of the total traffic. By adopting simple statistical models for anomalous and background traffic in the time domain, one can estimate model parameters in real time, thus obviating the need for a long training phase or manual parameter tuning. The detection mechanism uses a sequential probability ratio test, allowing for control over the false positive rate while examining the trade-off between detection time and the strength of an anomaly. Additionally, it uses both traffic-rate and packet-size statistics, yielding a bivariate model that eliminates most false positives. The method is analyzed using the bitrate SNR metric, which is shown to be an effective metric for anomaly detection. The performance of the resulting bivariate parametric detection mechanism (bPDM) is evaluated in three ways. First, synthetically generated traffic provides for a controlled comparison of detection time as a function of the anomalous level of traffic. Second, the approach is shown to be able to detect controlled artificial attacks over the USC campus network in varying real traffic mixes. Third, the proposed algorithm achieves rapid detection of real denial-of-service attacks as determined by the replay of previously captured network traces. The method developed in this paper is able to detect all attacks in these scenarios in a few seconds or less.
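
One piece of this that is easy to illustrate is the “estimate model parameters in real time” step. The sketch below keeps exponentially weighted running estimates of the mean and variance of the aggregate bitrate and of the packet size, so a detector always has current background parameters without a separate training phase. The weighting factor and the Gaussian background model are illustrative assumptions, not taken from the report.

```python
class RunningStats:
    """Exponentially weighted mean/variance of a scalar stream."""
    def __init__(self, alpha=0.05):
        self.alpha = alpha
        self.mean = None
        self.var = 0.0

    def update(self, x):
        if self.mean is None:
            self.mean = x
            return
        d = x - self.mean
        self.mean += self.alpha * d
        # Standard exponentially weighted variance recursion.
        self.var = (1 - self.alpha) * (self.var + self.alpha * d * d)

bitrate = RunningStats()
pktsize = RunningStats()

# One record per second: (bits observed, packets observed) -- toy values.
records = [(1_000_000 + 50_000 * (i % 3), 1_000 + 10 * (i % 5)) for i in range(30)]
for bits, pkts in records:
    bitrate.update(bits)
    pktsize.update(bits / pkts)     # mean packet size this second, in bits

print(f"background bitrate ~ N({bitrate.mean:.0f}, {bitrate.var ** 0.5:.0f}^2) b/s")
print(f"background packet size ~ N({pktsize.mean:.0f}, {pktsize.var ** 0.5:.0f}^2) bits")
```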

Citation: Gautam Thatte, Urbashi Mitra, and John Heidemann. Parametric Methods for Anomaly Detection in Aggregate Traffic. Technical Report ISI-TR-2009-663, USC/Information Sciences Institute, August, 2009. http://www.isi.edu/~johnh/PAPERS/Thatte09a.html.