Total: 56

Seulbae Kim, Heejo Lee, "Software systems at risk: an empirical study of cloned vulnerabilities in practice", Computers & Security, Vol. 77, pp. 720-736, Aug. 1. 2018.
With the growth of open source software (OSS), code clones - code fragments that are copied and pasted within or between software systems - are proliferating. Although code cloning may expedite the process of software development, it often critically affects the security of software because vulnerabilities and bugs can easily be propagated through code clones. These vulnerable code clones are increasing in conjunction with the growth of OSS, potentially contaminating many systems. Although researchers have attempted to detect code clones for decades, most of these attempts fail to scale to the size of the ever-growing OSS code base. The lack of scalability prevents software developers from readily managing code clones and associated vulnerabilities. Moreover, most existing clone detection techniques focus on merely detecting clones, which impairs their ability to accurately find “vulnerable” clones. In this paper, we propose VUDDY, an approach for the scalable detection of vulnerable code clones, which is capable of detecting security vulnerabilities in large software programs efficiently and accurately. Its extreme scalability is achieved by leveraging function-level granularity and a length-filtering technique that reduces the number of signature comparisons. This efficient design enables VUDDY to preprocess a billion lines of code in 14 hours and 17 minutes, after which it requires only a few seconds to identify code clones. In addition, we designed a vulnerability-preserving abstraction technique that renders VUDDY resilient to common modifications in cloned code, while preserving the vulnerable conditions even after the abstraction is applied. This extends the scope of VUDDY to identifying variants of known vulnerabilities with high accuracy. An implementation of VUDDY is provided as a free online service at IoTcube, an automated vulnerability detection platform.
In this study, we describe its principles, evaluate its efficacy, and analyze the vulnerabilities VUDDY detected in various real-world software systems, such as Apache HTTPD server and an Android smartphone.
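The combination of function-level fingerprints and length filtering described in the abstract can be sketched roughly as follows. The abstraction and hashing details here (parameter renaming to a placeholder, whitespace stripping, MD5) are simplified assumptions for illustration, not VUDDY's exact scheme:

```python
import hashlib

def abstract_function(body, params):
    # Simplified vulnerability-preserving abstraction: replace formal
    # parameter names with a placeholder so renamed clones still match.
    # (Naive str.replace; a real tool would tokenize first.)
    for p in params:
        body = body.replace(p, "FPARAM")
    return "".join(body.split())  # drop whitespace/formatting differences

def fingerprint(body, params):
    # A fingerprint is (length, hash): the length enables cheap filtering,
    # the hash enables exact matching.
    norm = abstract_function(body, params)
    return len(norm), hashlib.md5(norm.encode()).hexdigest()

def find_clones(target_fns, vuln_db):
    # vuln_db: dict mapping normalized length -> set of hashes of
    # known-vulnerable functions. Length filtering means only functions
    # of identical normalized length are ever compared.
    hits = []
    for name, body, params in target_fns:
        length, h = fingerprint(body, params)
        if h in vuln_db.get(length, set()):
            hits.append(name)
    return hits
```

A renamed-parameter clone of a known-vulnerable function hashes to the same fingerprint, while a function with a changed constant does not, even though it passes the length filter.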
Munkhbayar Bat-Erdene, Hyundo Park, Hongzhe Li, Heejo Lee, Mahn-soo Choi, "Entropy analysis to classify unknown packing algorithms for malware detection", Int'l Journal of Information Security, Vol. 16, No. 3, pp. 227-248, Jun. 1. 2017.
The proportion of packed malware has been growing rapidly, and now comprises more than 80% of all existing malware. In this paper, we propose a method for classifying the packing algorithms of given unknown packed executables, regardless of whether they are malware or benign programs. First, we scale the entropy values of a given executable and convert the entropy values of a particular location of memory into symbolic representations. Our proposed method uses symbolic aggregate approximation (SAX), which is known to be effective for large data conversions. Second, we classify the distribution of symbols using supervised learning classification methods, i.e., naive Bayes and support vector machines, for detecting packing algorithms. The results of our experiments involving a collection of 324 packed benign programs and 326 packed malware programs with 19 packing algorithms demonstrate that our method can identify the packing algorithms of given executables with a high accuracy of 95.35%, a recall of 95.83%, and a precision of 94.13%. We propose four similarity measurements for detecting packing algorithms based on SAX representations of the entropy values and an incremental aggregate analysis. Among these four metrics, the fidelity similarity measurement demonstrates the best matching result, i.e., a rate of accuracy ranging from 95.0% to 99.9%, which is 2% to 13% higher than that of the other three metrics. Our study confirms that packing algorithms can be identified through an entropy analysis based on a measure of the uncertainty of the running processes, without prior knowledge of the executables.
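The SAX conversion at the heart of this approach can be sketched as below: z-normalize the entropy series, reduce it with Piecewise Aggregate Approximation (PAA), and map each segment mean to a symbol via breakpoints from the standard normal distribution. The 4-symbol alphabet and exact breakpoints are illustrative choices, not the paper's configuration:

```python
import statistics

def sax(series, segments, breakpoints=(-0.67, 0.0, 0.67)):
    # z-normalize the entropy series so breakpoints from the standard
    # normal distribution apply.
    mu = statistics.mean(series)
    sd = statistics.pstdev(series) or 1.0
    z = [(v - mu) / sd for v in series]
    # Piecewise Aggregate Approximation: the mean of each segment.
    n = len(z)
    paa = []
    for i in range(segments):
        seg = z[i * n // segments:(i + 1) * n // segments]
        paa.append(sum(seg) / len(seg))
    # Map each segment mean to a symbol; the breakpoints split the
    # normal curve into equiprobable regions for a 4-letter alphabet.
    alphabet = "abcd"
    return "".join(alphabet[sum(v > b for b in breakpoints)] for v in paa)
```

A run of low entropy followed by high entropy (e.g. an unpacking stub writing packed data) collapses to a short word such as "ad", which the classifiers can then compare across packers.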
Munkhbayar Bat-Erdene, Taebeom Kim, Hyundo Park, Heejo Lee, "Packer Detection for Multi-Layer Executables Using Entropy Analysis", Entropy, Vol. 19, No. 3, Mar. 16. 2017.
Packing algorithms are broadly used to evade anti-malware systems, and the proportion of packed malware has been growing rapidly. However, only a few studies have been conducted on detecting various types of packing algorithms in a systematic way. Accordingly, we propose a method to classify the packing algorithm of a given executable into one of three categories: single-layer packing, re-packing, or multi-layer packing. We convert the entropy values of the executable file loaded into memory into symbolic representations using SAX (Symbolic Aggregate Approximation). Based on experiments with 2196 programs and 19 packing algorithms, the precision (97.7%), accuracy (97.5%), and recall (96.8%) of our method are high enough to confirm that entropy analysis is applicable to identifying packing algorithms.
Hyok An, Heejo Lee, Adrian Perrig, "Coordination of Anti-Spoofing Mechanisms in Partial Deployments", Journal of Communications and Networks, Vol. 18, No. 6, pp. 948-961, Dec. 31. 2016.
Internet protocol (IP) spoofing is a serious problem on the Internet. It is an attractive technique for adversaries who wish to amplify their network attacks and retain anonymity. Many approaches have been proposed to prevent IP spoofing attacks; however, they do not address a significant deployment issue, i.e., filtering inefficiency caused by a lack of deployment incentives for adopters. To defeat attacks effectively, one mechanism must be widely deployed on the network; however, the majority of anti-spoofing mechanisms are unsuitable for solving the deployment issue by themselves. Each mechanism can work separately; however, their defensive power is considerably weak when insufficiently deployed. If we coordinate partially deployed mechanisms such that they work together, they demonstrate considerably superior performance by creating a synergy effect that overcomes their limited deployment. Therefore, we propose a universal anti-spoofing (UAS) mechanism that incorporates existing mechanisms to thwart IP spoofing attacks. In the proposed mechanism, intermediate routers utilize any existing anti-spoofing mechanism that can ascertain whether a packet is spoofed, and record this decision in the packet header. The edge routers of a victim network can then estimate whether a packet is forged based on the information recorded by the upstream routers. The results of experiments conducted with real Internet topologies indicate that UAS reduces false alarms by up to 84.5% compared to the case where each mechanism operates individually.
Jieun Song, Kiryong Lee, Wan Yeon Lee, Heejo Lee, "Integrity verification of the ordered data structures in manipulated video content", Digital Investigation, Vol. 18, pp. 1-7, Sep. 1. 2016.
Video content stored in Video Event Data Recorders (VEDRs) is used as important evidence when events such as vehicle collisions occur. However, with sophisticated video editing software, assailants can easily manipulate video records to their advantage without leaving visible clues. Therefore, the integrity of video content recorded through VEDRs cannot be guaranteed, and the number of related forensic issues increases. Existing video integrity detection methods use the statistical properties of the pixels within each frame of the video. However, these methods require ample time, because they check frames individually. Moreover, frames can easily be replaced and forged using publicly available software. To solve this problem, we propose an integrity checking mechanism that uses the structure of the ordered fields in a video file, because existing video editing software does not allow users to access or modify field structures. In addition, because our proposed method checks the header information of video content only once, it requires much less detection time than existing methods that examine entire frames. We store the ordered file structure of video content as a signature in a database using a customized automated tool. The signature varies according to the video editing software used. The suspected video content is then compared against the set of signatures. If the file structure matches a signature, we recognize the manipulated video file and its corresponding editing software. We tested five types of video editing software that cover 99% of the video editing software market share, and exercised 305,981 saving options across all five video editing suites. As a result, we obtained 100% detection accuracy using the stored signatures, without false positives, in a collection of 305,981 video files. The principle of this method can be applied to other video formats.
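The matching step can be sketched as a lookup of a file's ordered field structure against a signature database. The tool names and field orders below are hypothetical placeholders, not the paper's actual signature set:

```python
# Hypothetical signature database: editing tool -> ordered top-level
# container fields (e.g. MP4 atoms) that the tool emits when saving.
SIGNATURES = {
    "EditorA": ("ftyp", "moov", "free", "mdat"),
    "EditorB": ("ftyp", "mdat", "moov"),
}

def identify_editor(field_order):
    # A manipulated file's container field order matches the signature of
    # the software that rewrote it; an order matching no signature gives
    # no evidence of manipulation by a known editor.
    for tool, sig in SIGNATURES.items():
        if tuple(field_order) == sig:
            return tool
    return None
```

Because only the header's field order is read, the check is a single tuple comparison per signature rather than a per-frame statistical analysis.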
Hongzhe Li, Jaesang Oh, Heejo Lee, "Detecting Violations of Security Requirements for Vulnerability Discovery in Source Code", IEICE Trans. on Information and Systems, Vol. E99-D, No. 9, pp. 2385-2389, Sep. 1. 2016.
Finding software vulnerabilities in source code before the program gets deployed is crucial to ensure the software quality. Existing source code auditing tools for vulnerability detection generate too many false positives, and only limited types of vulnerability can be detected automatically. In this paper, we propose an extendable mechanism to reveal vulnerabilities in source code with low false positives by specifying security requirements and detecting requirement violations of the potential vulnerable sinks. The experimental results show that the proposed mechanism can detect vulnerabilities with zero false positives and indicate the extendability of the mechanism to cover more types of vulnerabilities.
Choongin Lee, Jehyun Lee, Youngbin Pyo, Heejo Lee, "Broken Integrity Detection of Video Files in Video Event Data Recorders", TIIS Trans. on Internet and Information Systems, Vol. 10, No. 8, pp. 3943-3957, Aug. 31. 2016.
As digital evidence has a highly influential role in proving the innocence of suspects, methods for integrity verification of such digital evidence have become essential in the digital forensic field. Most surveillance camera systems are not equipped with proper built-in integrity protection functions. Because digital forgery techniques are becoming increasingly sophisticated, manually determining whether digital content has been falsified is becoming extremely difficult for investigators. Hence, systematic approaches to forensic integrity verification are essential for ascertaining truth or falsehood. We propose an integrity determination method that utilizes the structure of the video content in a Video Event Data Recorder (VEDR). The proposed method identifies the difference in frame index fields between a forged file and an original file. Experiments conducted using real VEDRs in the market and video files forged by a video editing tool demonstrate that the proposed integrity verification scheme can detect broken integrity in video content.
Hyuckmin Kwon, Seulbae Kim, Heejo Lee, "SIGMATA: Storage Integrity Guaranteeing Mechanism against Tampering Attempts for Video Event Data Recorders", Journal of Systemics, Cybernetics and Informatics, Vol. 14, No. 2, pp. 42-47, Aug. 16. 2016.
The usage and market size of video event data recorders (VEDRs), also known as car black boxes, are rapidly increasing. Since VEDRs can provide more visual information about car accident situations than any other device currently used for accident investigations (e.g., closed-circuit television), the integrity of VEDR contents is important to any meaningful investigation. Researchers have focused on file system integrity or photographic approaches to integrity verification. However, unlike other general data, the video data in VEDRs exhibit a unique I/O behavior in that the videos are stored chronologically. In addition, the owners of VEDRs can manipulate unfavorable scenes after accidents to conceal their recorded behavior. Since prior approaches do not consider the time relationship between frames and fail to discover frame-wise forgery, a more detailed integrity assurance is required. In this paper, we focus on the development of a frame-wise forgery detection mechanism that resolves the limitations of previous mechanisms. We introduce SIGMATA, a novel storage integrity guaranteeing mechanism against tampering attempts for VEDRs. We describe its operation, demonstrate its effectiveness for detecting possible frame-wise forgery, and compare it with existing mechanisms. The results show that existing mechanisms fail to detect any frame-wise forgery, while our mechanism thoroughly detects every frame-wise forgery. We also evaluate its computational overhead using real VEDR videos. The results show that SIGMATA discovers frame-wise forgery attacks effectively and efficiently, with an encoding overhead of less than 1.5 milliseconds per frame.
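One standard way to bind frames to their chronological order, in the spirit of the frame-wise integrity described above, is a hash chain: each frame's tag depends on all frames before it, so replacing or removing any frame breaks every subsequent link. This is an illustrative construction, not SIGMATA's actual encoding:

```python
import hashlib

def chain_frames(frames):
    # Compute, for each frame, a digest linking it to its predecessor.
    # Tampering with any single frame invalidates all later tags.
    prev = b""
    tags = []
    for f in frames:
        prev = hashlib.sha256(prev + f).digest()
        tags.append(prev)
    return tags

def verify(frames, tags):
    # Recompute the chain and compare; a mismatch pinpoints frame-wise
    # forgery that a whole-file checksum alone would also catch, but the
    # chain additionally fixes the frames' order.
    return tags == chain_frames(frames)
```

Per-frame cost is one hash invocation, which is consistent with overheads in the low-milliseconds-per-frame range.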
Jonghoon Kwon, Jehyun Lee, Heejo Lee, Adrian Perrig, "PsyBoG: A scalable botnet detection method for large-scale DNS traffic", Computer Networks, Vol. 97, pp. 48-73, Mar. 14. 2016.
Domain Name System (DNS) traffic has become a rich source of information from a security perspective. However, the volume of DNS traffic has been skyrocketing, such that security analyzers experience difficulties in collecting, retrieving, and analyzing the DNS traffic in response to modern Internet threats. More precisely, much of the research relating to DNS has been negatively affected by the dramatic increase in the number of queries and domains. This phenomenon has necessitated a scalable approach, which is not dependent on the volume of DNS traffic. In this paper, we introduce a fast and scalable approach, called PsyBoG, for detecting malicious behavior within large volumes of DNS traffic. PsyBoG leverages a signal processing technique, power spectral density (PSD) analysis, to discover the major frequencies resulting from the periodic DNS queries of botnets. The PSD analysis allows us to detect sophisticated botnets regardless of their evasive techniques, sporadic behavior, and even normal users’ traffic. Furthermore, our method allows us to deal with large-scale DNS data by utilizing only the timing information of query generation, regardless of the number of queries and domains. Finally, PsyBoG discovers groups of hosts which show similar patterns of malicious behavior. PsyBoG was evaluated by conducting experiments with two different data sets, namely DNS traces generated by real malware in controlled environments and a large number of real-world DNS traces collected from a recursive DNS server, an authoritative DNS server, and Top-Level Domain (TLD) servers. We utilized the malware traces as the ground truth, and, as a result, PsyBoG performed with a detection accuracy of 95%. By using a large number of DNS traces, we were able to demonstrate the scalability and effectiveness of PsyBoG in terms of practical usage. Finally, PsyBoG detected 23 unknown and 26 known botnet groups with 0.1% false positives.
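The core signal-processing step can be sketched as follows: bin a host's query timestamps into a time series and locate the dominant frequency of its power spectrum. A naive O(n²) DFT keeps the sketch dependency-free; PsyBoG's actual pipeline (windowing, grouping, thresholds) is more involved:

```python
import math

def psd_peak(timestamps, window, bin_size=1.0):
    # Bin DNS query timestamps into a 0/1 time series over the window,
    # then find the frequency bin with maximal spectral power.
    n = int(window / bin_size)
    x = [0.0] * n
    for t in timestamps:
        x[int(t / bin_size) % n] = 1.0
    mean = sum(x) / n
    x = [v - mean for v in x]          # remove the DC component
    best_k, best_p = 0, -1.0
    for k in range(1, n // 2):         # naive DFT, one bin at a time
        re = sum(x[i] * math.cos(2 * math.pi * k * i / n) for i in range(n))
        im = sum(x[i] * math.sin(2 * math.pi * k * i / n) for i in range(n))
        p = re * re + im * im
        if p > best_p:
            best_k, best_p = k, p
    return best_k / window             # dominant frequency in Hz
```

Periodic C&C polling concentrates power at the polling frequency, whereas a human user's queries spread power across the spectrum; only timestamps are needed, never the queried names themselves.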
Wan Yeon Lee, Hyuckmin Kwon, Heejo Lee, "Comments on the Linux FAT32 allocator and file creation order", Digital Investigation, Vol. 15, pp. 119-123, Dec. 1. 2015.
Minnaard proposed a novel method that constructs a creation time bound of files recovered without time information. The method exploits a relationship between the creation order of files and their locations on a storage device managed with the Linux FAT32 file system. This creation order reconstruction method is valid only in non-wraparound situations, where the file creation time in a former position is earlier than that in a latter position. In this article, we show that if the Linux FAT32 file allocator traverses the storage space more than once, the creation time of a recovered file is possibly earlier than that of a former file and possibly later than that of a latter file on the Linux FAT32 file system. Also it is analytically verified that there are at most n candidates for the creation time bound of each recovered file where n is the number of traversals by the file allocator. Our analysis is evaluated by examining file allocation patterns of two commercial in-car dashboard cameras.
Yu Seung Kim, Patrick Tague, Heejo Lee, Hyogon Kim, "A jamming approach to enhance enterprise Wi-Fi secrecy through spatial access control", Wireless Networks, Vol. 21, No. 8, pp. 2631-2647, Nov. 1. 2015.
Prevalent Wi-Fi networks have adopted various protections to prevent eavesdropping, which is enabled by the intrinsic shared nature of the wireless medium. However, many of these protections are based on pre-shared secrets that incur key management costs, and they remain vulnerable to practical attacks. In this study, we investigate the feasibility of using a defensive jamming technique to protect enterprise Wi-Fi networks from potential eavesdroppers. This non-cryptographic approach requires neither pre-shared keys nor high deployment costs. Defensive jammers geographically confine the wireless coverage of a Wi-Fi access point, and thus block message reception outside an arbitrary boundary at the physical layer. We provide a theoretical model for fine-tuning the jamming parameters for jammer placement. We then discuss practical considerations, including optimized jammer arrangement algorithms, mitigation of interference with legitimate communications, and countermeasures against advanced attackers.
Wade Trappe, Lalitha Sankar, Radha Poovendran, Heejo Lee, Srdjan Capkun, "Introduction to the Issue on Signal and Information Processing for Privacy", IEEE Journal of Selected Topics in Signal Processing, Vol. 9, No. 7, pp. 1173-1175, Oct. 1. 2015.
Jonghoon Kwon, Dongwon Seo, Minjin Kwon, Heejo Lee, Adrian Perrig, Hyogon Kim, "An incrementally deployable anti-spoofing mechanism for software-defined networks", Computer Communications, Vol. 64, pp. 1-20, Jun. 15. 2015.
Internet attacks often use IP spoofing to forge the source IP address of packets, and thereby hide the identity of the source. This causes many serious security problems, such as the difficulty of verifying packet authenticity and of IP traceback. While many IP spoofing prevention techniques have been proposed apart from ingress filtering, none have achieved widespread real-world use. One main reason is the lack of properties favoring incremental deployment, an essential component for new technology adoption. An incrementally deployable protocol should have three properties: initial benefits for early adopters, incremental benefits for subsequent adopters, and effectiveness under partial deployment. Since no previous anti-spoofing solution satisfies all three properties, we propose an anti-spoofing mechanism called “BGP-based Anti-Spoofing Extension” (BASE). BASE is an anti-spoofing protocol designed to fulfill the incremental deployment properties. Furthermore, BASE is designed to work in software-defined networks (SDN). This motivates network operators to adopt BASE, since SDN supports large-scale network control with simple operations. Based on simulations using a model of Internet connectivity, BASE shows desirable IP spoofing prevention capabilities under partial deployment. We find that just 30% deployment can drop about 97% of attack packets. It is shown that BASE not only provides benefits to early adopters, but also outperforms previous anti-spoofing mechanisms.
Hongzhe Li, Hyuckmin Kwon, Jonghoon Kwon, Heejo Lee, "CLORIFI: Software Vulnerability Discovery using Code Clone Verification", Concurrency and Computation: Practice and Experience, Vol. 28, No. 6, pp. 1900-1917, Apr. 14. 2015.
Software vulnerability has long been considered an important threat to system safety. A vulnerability is often reproduced because of frequent code reuse by programmers. Security patches are usually not propagated to all code clones; however, they could be leveraged to discover unknown vulnerabilities. Static code auditing approaches are frequently proposed to scan source code for security flaws; unfortunately, these approaches generate too many false positives. While dynamic execution analysis methods can precisely report vulnerabilities, they are ineffective in path exploration, which prevents them from scaling to large programs. To detect vulnerabilities scalably and precisely, in this paper, we propose a novel mechanism, called software vulnerability discovery using Code Clone Verification (CLORIFI), that scalably discovers vulnerabilities in real-world programs using code clone verification. First, we use a fast and scalable syntax-based method to find code clones in program source code based on released security patches. Subsequently, code clones are verified using concolic testing to dramatically decrease the false positives. In addition, we mitigate the path explosion problem by backward sensitive-data tracing in concolic execution. Experiments have been conducted with real-world open-source projects (recent Linux OS distributions and program packages). As a result, we found 7 real vulnerabilities out of 63 code clones from Ubuntu 14.04 LTS (Canonical, London, UK) and 10 vulnerabilities out of 40 code clones from CentOS 7.0 (The CentOS Project, community contributed). Furthermore, we confirmed more code clone vulnerabilities in various versions of programs including Rsyslog (open source; original author: Rainer Gerhards), Apache (Apache Software Foundation, Forest Hill, Maryland, USA) and Firefox (Mozilla Corporation, Mountain View, California, USA).
In order to evaluate the effectiveness of vulnerability verification in a systematic way, we also utilized the Juliet Test Suite as a measurement object. The results show that CLORIFI achieves 98% accuracy with 0 false positives.
Jehyun Lee, Suyeon Lee, Heejo Lee, "Screening smartphone applications using malware family signatures", Computers & Security, Vol. 52, pp. 234-249, Feb. 21. 2015.
The sharp increase in smartphone malware has become one of the most serious security problems. Since the Android platform has taken the dominant position in smartphone popularity, the number of Android malware samples has grown correspondingly and represents a critical threat to smartphone users. This rise in malware is primarily attributable to the occurrence of variants of existing malware. A set of variants stemming from one malware sample can be considered one malware family, and malware families cover more than half of the Android malware population. A conventional technique for defeating malware is signature matching, which is efficient from a time perspective but not very practical because of its lack of robustness against malware variants. As a counter-approach for handling the issue of variants, behavior analysis techniques have been proposed, but they require extensive time and resources. In this paper, we propose an Android malware detection mechanism that uses automated family signature extraction and family signature matching. The key concept of the mechanism is to extract a set of family-representative binary patterns from evaluated family members as a signature, and to classify each set of variants into a malware family via an estimation of similarity to the signatures. The proposed family signature and detection mechanism offers more flexible variant detection than legacy signature matching, which is strictly dependent on the presence of a specific string. Furthermore, compared with previous behavior analysis techniques that consider family detection, the proposed family signature has higher detection accuracy without the significant overhead of data and control flow analysis. Using the proposed signature, we can detect new variants of known malware efficiently and accurately by static matching.
We evaluated our mechanism with 5846 real-world Android malware samples belonging to 48 families collected in April 2014 at an anti-virus company; experimental results showed that our mechanism achieved greater than 97% accuracy in detecting variants. We also demonstrated that the mechanism has linear time complexity in the number of target applications.
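The family-signature idea can be sketched as follows: take the byte n-grams common to all evaluated family members as the signature, and classify a sample into the family when it contains a sufficient fraction of them. The n-gram size and threshold are illustrative assumptions, not the paper's parameters:

```python
def family_signature(members, n=4):
    # Signature: the set of n-gram byte patterns shared by every
    # evaluated member of the family.
    grams = [{m[i:i + n] for i in range(len(m) - n + 1)} for m in members]
    return set.intersection(*grams)

def matches(sample, sig, threshold=0.8):
    # A sample is classified into the family when it contains a large
    # enough fraction of the family's representative patterns, so exact
    # string presence is not required, unlike legacy signature matching.
    found = sum(1 for g in sig if g in sample)
    return bool(sig) and found / len(sig) >= threshold
```

A new variant that keeps the shared payload but changes its surrounding bytes still matches, while an unrelated benign binary shares almost none of the patterns.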
Sangwook LEE, Ji Eun SONG, Wan Yeon LEE, Young Woong KO, Heejo LEE, "Integrity Verification Scheme of Video Contents in Surveillance Cameras for Digital Forensic Investigations", IEICE Trans. on Information and Systems, Vol. E98-D, No. 1, pp. 95-97, Jan. 1. 2015.
For digital forensic investigations, the proposed scheme verifies the integrity of video contents in legacy surveillance camera systems with no built-in integrity protection. The scheme exploits video frames remaining in slack space of storage media, instead of timestamp information vulnerable to tampering. The scheme is applied to integrity verification of video contents formatted with AVI or MP4 files in automobile blackboxes.
Ilju Seo, Heejo Lee, Seungchul Han, "Cylindrical Coordinates Security Visualization for multiple domain command and control botnet detection", Computers & Security, Vol. 46, pp. 141-153, Oct. 1. 2014.
Botnets are among the most dangerous forms of network-based attack. They cause severe network disruptions through massive coordinated attacks, and these disruptions frequently cost enterprises large sums in financial losses. In this paper, we conduct an in-depth investigation of the issue of botnet detection and present a new security visualization tool for visualizing botnet behaviors in DNS traffic. The core mechanism is developed with the objective of enabling users to recognize security threats promptly and mitigate damage by visualizing only DNS traffic in cylindrical coordinates. We compare our visualization method with existing ones, and the experimental results show that ours has greater perceptual efficiency. The ideas and results of this study will contribute toward designing an advanced visualization technique that offers better security. The approach proposed in this study can also be utilized to derive new and valuable security insights from the complex correlations of Big Data.
Jehyun Lee, Heejo Lee, "GMAD: Graph-based Malware Activity Detection by DNS traffic analysis", Computer Communications, Vol. 49, No. 1, pp. 33-47, Aug. 1. 2014.
Malicious activities on the Internet are one of the most dangerous threats to Internet users and organizations. Remotely controlled malicious software is one of the most critical vehicles for executing such malicious activities. Since blocking the domain names used for command and control (C&C) of malware by analyzing its Domain Name System (DNS) activities has been the most effective and practical countermeasure, attackers attempt to hide their malware by adopting several evasion techniques, such as client sub-grouping and domain flux, in their DNS activities. A common feature of the recently developed evasion techniques is the utilization of multiple domain names to render malware DNS activities temporally and spatially more complex. In contrast to analyzing the DNS activities for a single domain name, detecting the malicious DNS activities for multiple domain names is not a simple task. The DNS activities of malware that uses multiple domain names, termed multi-domain malware, are sparser and less synchronized with respect to space and time. In this paper, we introduce a malware activity detection mechanism, GMAD: Graph-based Malware Activity Detection, which utilizes sequences of DNS queries in order to achieve robustness against evasion techniques. GMAD uses a graph termed the Domain Name Travel Graph, which expresses DNS query sequences, to detect infected clients and malicious domain names. In addition to detecting malware C&C domain names, GMAD detects malicious DNS activities such as blacklist checking and fake DNS querying. To detect malicious domain names utilized for malware activities, GMAD applies domain name clustering using the graph structure and determines malicious clusters by referring to public blacklists. Through experiments with four sets of DNS traffic captured in two ISP networks in the U.S. and South Korea, we show that GMAD detected thousands of malicious domain names that had neither been blacklisted nor detected through group activity of DNS clients. In a detection accuracy evaluation, GMAD showed an accuracy rate higher than 99% on average, with higher than 90% precision and a lower than 0.5% false positive rate. The proposed method is shown to be effective for detecting multi-domain malware activities irrespective of evasion techniques.
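The graph construction can be sketched as below: link domains queried consecutively by the same client, then take the domains reachable from a blacklisted seed as one suspicious cluster. This is a minimal sketch of the Domain Name Travel Graph idea; GMAD's actual clustering and scoring are richer:

```python
from collections import defaultdict

def build_dntg(query_logs):
    # Domain Name Travel Graph: a directed edge links two domains that
    # appear consecutively in one client's DNS query sequence.
    graph = defaultdict(set)
    for client, seq in query_logs.items():
        for a, b in zip(seq, seq[1:]):
            if a != b:
                graph[a].add(b)
    return graph

def cluster_from(domain, graph):
    # Domains reachable from a seed (e.g. a publicly blacklisted C&C
    # name) form one behavioral cluster of suspicious names, even if
    # the other names have never been blacklisted themselves.
    seen, stack = set(), [domain]
    while stack:
        d = stack.pop()
        if d not in seen:
            seen.add(d)
            stack.extend(graph.get(d, ()))
    return seen
```

Because edges come from per-client query order rather than per-domain timing, the structure survives domain flux: new flux domains attach to the same travel paths.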
Dongwon Seo, Heejo Lee, Adrian Perrig, "APFS: Adaptive Probabilistic Filter Scheduling against Distributed Denial-of-Service Attacks", Computers & Security, Vol. 39, Part B, pp. 366-385, Nov. 1. 2013.
Distributed denial-of-service (DDoS) attacks are considered to be among the most crucial security challenges in current networks because they significantly disrupt the availability of a service by consuming extreme amounts of resources and/or by creating link congestion. One type of countermeasure against DDoS attacks is a filter-based approach, in which filter-based intermediate routers within the network coordinate with each other to filter undesired flows. The key to success for this approach is effective filter propagation and management; however, existing filter-based approaches do not consider these. In this paper, we define three necessary properties for a viable DDoS solution: practical filter propagation, filter placement at effective filter routers, and filter management that maximizes the efficacy of the defense. We propose a novel mechanism, called Adaptive Probabilistic Filter Scheduling (APFS), that effectively defends against DDoS attacks and also satisfies the three necessary properties. In APFS, a filter router adaptively calculates its own marking probability based on three factors: 1) hop count from a sender, 2) the filter router’s resource availability, and 3) the filter router’s link degree. That is, a filter router that is closer to attackers, has more available resources, or has more connections to neighbors inserts its marking with a higher probability. These three factors lead a victim to receive more markings from more effective filter routers, and thus, filters are quickly distributed to effective filter routers. Moreover, each filter router manages multiple filters using a filter scheduling policy that allows it to selectively keep the most effective filters depending on the attack situation. Experimental results show that APFS achieves faster filter propagation and a higher attack blocking ratio than existing approaches that use a fixed marking probability.
In addition, APFS has a 44% higher defense effectiveness than existing filter-based approaches that do not use a filter scheduling policy.
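The three-factor adaptive marking can be sketched as below. The abstract names the factors but not their combination, so the equal-weight average used here is purely a hypothetical weighting for illustration, not the paper's formula:

```python
def marking_probability(hop_count, max_hops, free_capacity,
                        link_degree, max_degree):
    # Hypothetical combination of APFS's three factors: closeness to the
    # sender, fraction of free filter capacity, and normalized link
    # degree. Each factor alone raises the marking probability.
    closeness = 1.0 - (hop_count - 1) / max_hops   # nearer the sender -> higher
    resources = free_capacity                       # more free slots -> higher
    degree = link_degree / max_degree               # better connected -> higher
    p = (closeness + resources + degree) / 3.0
    return max(0.0, min(1.0, p))
```

Under any such monotone weighting, a lightly loaded, well-connected router near the attacker marks far more often than a distant, saturated one, which is what steers filters toward effective routers.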
Dongwon Seo, Heejo Lee, Ejovi Nuwere, "SIPAD: SIP-VoIP Anomaly Detection using a Stateful Rule Tree", Computer Communications, Vol. 36, No. 5, pp. 562-574, Mar. 1. 2013.
Voice over IP (VoIP) services have become prevalent lately because of their potential advantages such as economic efficiency and useful features. Meanwhile, Session Initiation Protocol (SIP) is being widely used as a session protocol for the VoIP services. Many mobile VoIP applications have recently been launched, and they are becoming attractive targets for attackers to steal private information. In particular, malformed SIP messages and SIP flooding attacks are the most significant attacks as they cause service disruption by targeting call procedures and system resources. Although much research has been conducted in an effort to address the problems, they remain unresolved challenges due to the ease of launching variants of attacks. In this paper, we propose a stateful SIP inspection mechanism, called SIP-VoIP Anomaly Detection (SIPAD), that leverages a SIP-optimized data structure to detect malformed SIP messages and SIP flooding attacks. SIPAD precomputes the SIP-optimized data structure (termed a stateful rule tree) that reorganizes the SIP rule set by hierarchical correlation. Depending on the current state and the message type, SIPAD determines the corresponding branches from the stateful rule tree, and inspects a SIP message’s structure by comparing it to the branches. The SIP-optimized rule tree provides higher detection accuracy, wider detection coverage and faster detection than existing approaches. Conventional SIP inspection schemes tend to have high overhead costs due to the complexity of their rule matching schemes. Experimental results of our SIP-optimized approach, by contrast, indicate that it dramatically reduces overhead and can even be deployed in resource-constrained environments such as smartphones.
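The stateful rule tree lookup can be sketched as a nested map keyed by dialog state and then message type. The states and required header fields below are a toy illustration, not SIPAD's full SIP rule set:

```python
# Toy stateful rule tree: dialog state -> message type -> required headers.
RULE_TREE = {
    "idle":    {"INVITE": {"Via", "From", "To", "Call-ID", "CSeq"}},
    "calling": {"ACK":    {"Via", "From", "To", "Call-ID", "CSeq"},
                "BYE":    {"Via", "From", "To", "Call-ID", "CSeq"}},
}

def inspect(state, msg_type, headers):
    # Select the branch for the current state and message type, then flag
    # the message if it is out of state (flooding/replay symptoms) or
    # missing required fields (malformed message).
    branch = RULE_TREE.get(state, {})
    if msg_type not in branch:
        return "anomalous: unexpected message for state"
    missing = branch[msg_type] - set(headers)
    if missing:
        return "anomalous: malformed (missing %s)" % ", ".join(sorted(missing))
    return "ok"
```

Because the tree is precomputed, each inspection is a couple of dictionary lookups and one set difference, which is why such a structure suits resource-constrained devices.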
Hyunsang Choi, Heejo Lee, "Identifying botnets by capturing group activities in DNS traffic", Computer Networks, Vol. 56, No. 1, pp. 20-33, Jan. 12. 2012.
Botnets have become the main vehicle to conduct online crimes such as DDoS, spam, phishing and identity theft. Even though numerous efforts have been directed towards detection of botnets, evolving evasion techniques easily thwart detection. Moreover, existing approaches can be overwhelmed by the large amount of data needed to be analyzed. In this paper, we propose a light-weight mechanism to detect botnets using their fundamental characteristic, i.e., group activity. The proposed mechanism, referred to as BotGAD (botnet group activity detector), needs only a small amount of data from DNS traffic to detect botnets, rather than all network traffic content or known signatures. BotGAD can detect botnets in a large-scale network in real-time even when the botnet performs encrypted communications. Moreover, BotGAD can detect botnets that adopt recent evasion techniques. We evaluate BotGAD using multiple DNS traces collected from different sources including a campus network and large ISP networks. The evaluation shows that BotGAD can automatically detect botnets while providing real-time monitoring in large scale networks.
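The group-activity intuition can be sketched as follows: hosts in a botnet tend to query their command-and-control domain together, so the set of source IPs querying a domain stays similar across time windows, while benign clients come and go independently. The similarity measure and threshold-free comparison below are an illustrative simplification, not BotGAD's actual parameters.

```python
# Hedged sketch of DNS group-activity detection (illustrative, not BotGAD's
# actual algorithm or parameters).

def jaccard(a, b):
    """Set similarity in [0, 1]."""
    return len(a & b) / len(a | b) if a | b else 0.0

def group_activity(windows):
    """windows: list of sets of source IPs seen querying one domain,
    one set per time window. Returns the mean similarity of consecutive
    windows; values near 1 suggest coordinated (botnet-like) querying."""
    sims = [jaccard(windows[i], windows[i + 1]) for i in range(len(windows) - 1)]
    return sum(sims) / len(sims)

botnet = [{"10.0.0.1", "10.0.0.2", "10.0.0.3"}] * 3        # same group each window
benign = [{"10.0.0.1"}, {"10.0.0.9"}, {"10.0.0.7"}]        # unrelated clients
print(group_activity(botnet) > group_activity(benign))     # True
```

Only per-window sets of querying hosts are needed, which is why such a scheme can remain lightweight and oblivious to payload encryption.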
Yilin Mo, Tiffany Hyunjin Kim, Kenneth Brancik, Dona Dickinson, Heejo Lee, Adrian Perrig, Bruno Sinopoli, "Cyber-Physical Security of a Smart Grid Infrastructure", Proceedings of the IEEE, Vol. 100, No. 1, pp. 195-209, Jan. 1. 2012.
It is often appealing to assume that existing solutions can be directly applied to emerging engineering domains. Unfortunately, careful investigation of the unique challenges presented by new domains exposes their idiosyncrasies, thus often requiring new approaches and solutions. In this paper, we argue that the "smart" grid, replacing its incredibly successful and reliable predecessor, poses a series of new security challenges, among others, that require novel approaches to the field of cyber security. We will call this new field cyber-physical security. The tight coupling between information and communication technologies and physical systems introduces new security concerns, requiring a rethinking of the commonly used objectives and methods. Existing security approaches are either inapplicable, not viable, insufficiently scalable, incompatible, or simply inadequate to address the challenges posed by highly complex environments such as the smart grid. A concerted effort by the entire industry, the research community, and the policy makers is required to achieve the vision of a secure smart grid infrastructure.
Wan Yeon Lee, Hyogon Kim, Heejo Lee, "Minimum-Energy Semi-Static Scheduling of a Periodic Real-Time Task on DVFS-Enabled Multi-Core Processors", IEICE Trans. on Information and Systems, Vol. E94-D, No. 12, pp. 2389-2392, Dec. 1. 2011.
The proposed scheduling scheme minimizes the energy consumption of a real-time task on a multi-core processor with dynamic voltage and frequency scaling capability. The scheme allocates a pertinent number of cores to the task execution, inactivates unused cores, and assigns the lowest frequency meeting the deadline. For a periodic real-time task with consecutive real-time instances, the scheme prepares the minimum-energy solutions for all input cases offline, and applies one of the prepared solutions to each real-time instance at runtime.
Sunghyun Kim, Heejo Lee, "Classifying Rules by In-out Traffic Direction to Avoid Security Policy Anomaly", TIIS Trans. on Internet and Information Systems, Vol. 4, No. 4, pp. 671- 690, Aug. 27. 2010.
The continuous growth of attacks on the Internet causes a large number of rules to be generated in security devices such as Intrusion Prevention Systems, firewalls, etc. Policy anomalies in security devices create security holes and prevent the system from quickly determining whether to allow or deny a packet. Policy anomalies exist among the rules in multiple security devices as well as within a single security device. Resolving policy anomalies requires complex and complicated algorithms. In this paper, we propose a new method to remove policy anomalies in a single security device and to avoid policy anomalies among the rules in distributed security devices. The proposed method classifies rules according to traffic direction and checks policy anomalies in each device. It is unnecessary to compare the rules for outgoing traffic with the rules for incoming traffic; therefore, by classifying rules by in-out traffic direction, the proposed method can reduce the number of rules to be compared by up to half. Instead of detecting policy anomalies across distributed security devices, each device adopts the rules of others to avoid anomalies. After removing policy anomalies in each device, other firewalls can keep the policy consistent and free of anomalies by adopting the rules of a trusted firewall. In addition, the method blocks unnecessary traffic because a source side sends only as much traffic as the destination side accepts. We also describe another policy anomaly that can arise under a connection-oriented communication protocol.
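The comparison-count saving from direction-based classification can be sketched concretely: rules for outgoing traffic never need to be compared with rules for incoming traffic, so pairwise anomaly checks run within each class only. The rule fields here are hypothetical illustrations.

```python
# Sketch of halving rule comparisons by classifying rules as inbound/outbound
# (hypothetical rule format, not the paper's exact representation).

rules = [
    {"id": 1, "dir": "in",  "src": "any",         "dst": "10.0.0.0/8"},
    {"id": 2, "dir": "in",  "src": "any",         "dst": "10.1.0.0/16"},
    {"id": 3, "dir": "out", "src": "10.0.0.0/8",  "dst": "any"},
    {"id": 4, "dir": "out", "src": "10.2.0.0/16", "dst": "any"},
]

def pairwise(n):
    """Number of unordered rule pairs that must be checked for anomalies."""
    return n * (n - 1) // 2

naive = pairwise(len(rules))                       # compare every pair
inbound  = [r for r in rules if r["dir"] == "in"]
outbound = [r for r in rules if r["dir"] == "out"]
classified = pairwise(len(inbound)) + pairwise(len(outbound))

print(naive, classified)   # 6 2
```

With the two classes balanced, the quadratic comparison cost roughly halves, which matches the "up to a half" claim in the abstract.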
Sunghyun Kim, Heejo Lee, "Abnormal Policy Detection and Correction Using Overlapping Transition", IEICE Trans. on Information and Systems, Vol. E93-D, No. 5, May. 1. 2010.
Policy in security devices such as firewalls and Network Intrusion Prevention Systems (NIPS) is usually implemented as a sequence of rules, which allows network packets to proceed or be discarded based on each rule's decision. Since attack methods are increasing rapidly, a huge number of security rules are generated and maintained in security devices. Under attack or during heavy traffic, a misconfigured policy creates security holes and prevents the system from deciding quickly whether to allow or deny a packet. Anomalies between rules occur when the rules overlap. In this paper, we propose a new method to detect anomalies among rules and generate new rules without configuration errors, in multiple security devices as well as in a single security device. The proposed method cuts the overlap regions among rules into minimum overlap regions and finds the abnormal domain regions of the rules' predicates. By classifying rules according to the network traffic flow, the proposed method not only reduces computation overhead but also blocks unnecessary traffic among distributed devices.
Xuan Hung Le, Sungyoung Lee, Young-Koo Lee, Heejo Lee, Murad Khalid, Ravi Sankar, "Activity-oriented access control to ubiquitous hospital information and services", Information Sciences, Vol. 180, No. 16, pp. 2979-2990, Apr. 19. 2010.
In hospital information systems, protecting the confidentiality of health information, whilst at the same time allowing authorized physicians to access it conveniently, is a crucial requirement. The need to deliver health information at the point-of-care is a primary factor to increase healthcare quality and cost efficiency. However, current systems require considerable coordination effort of hospital professionals to locate relevant documents to support a specific activity. This paper presents a flexible and dynamic access control model, Activity-Oriented Access Control (AOAC), which is based on user activity to authorize access permissions. A user is allowed to perform an activity if he/she holds a number of satisfactory attributes (i.e. roles, assignments, etc.) under a specified condition (e.g. time, location). Results of an AOAC implementation in a realistic healthcare scenario have shown that it meets two important requirements: protecting the confidentiality of health information by denying unauthorized access, and allowing physicians to conveniently browse medical data at the point-of-care. Furthermore, the average execution time was 0.078 s, which allows AOAC to work in real time.
Khaliq-ur-Rahman Raazi Syed Muhammad, Heejo Lee, Sungyoung Lee, Young-Koo Lee, "BARI+: A Biometric Based Distributed Key Management Approach for Wireless Body Area Networks", Sensors, Vol. 10, pp. 3911 - 3933, Apr. 16. 2010.
Wireless body area networks (WBAN) consist of resource constrained sensing devices just like other wireless sensor networks (WSN). However, they differ from WSN in topology, scale and security requirements. Due to these differences, key management schemes designed for WSN are inefficient and unnecessarily complex when applied to WBAN. Considering the key management issue, WBAN are also different from WPAN because WBAN can use random biometric measurements as keys. We highlight the differences between WSN and WBAN and propose an efficient key management scheme, which makes use of biometrics and is specifically designed for WBAN domain.
Riaz Ahmed Shaikh, Brian J. d'Auriol, Heejo Lee, Sungyoung Lee, "Privacy and Trust Management Schemes of Wireless Sensor Networks: A Survey", Handbook of Research on Developments and Trends in Wireless Sensor Networks: From Principle to Practice, Feb. 9. 2010.
Until recently, researchers have focused on the cryptographic-based security issues more intensively than the privacy and trust issues. However, without the incorporation of trust and privacy features, cryptographic-based security mechanisms are not capable of singlehandedly providing robustness, reliability and completeness in a security solution. In this chapter, we present generic and flexible taxonomies of privacy and trust. We also give detailed critical analyses of the state-of-the-art research, in the field of privacy and trust that is currently not available in the literature. This chapter also highlights the challenging issues and problems.
Riaz Ahmed Shaikh, Hassan Jameel, Brian J. d'Auriol, Heejo Lee, Sungyoung Lee, Young-Jae Song, "Achieving Network Level Privacy in Wireless Sensor Networks", Sensors, Vol. 10, pp. 1447-1472, Feb. 2010.
Full network level privacy has often been categorized into four sub-categories: Identity, Route, Location and Data privacy. Achieving full network level privacy is a critical and challenging problem due to the constraints imposed by the sensor nodes (e.g., energy, memory and computation power), sensor networks (e.g., mobility and topology) and QoS issues (e.g., packet reach-ability and timeliness). In this paper, we propose two new identity, route, and location privacy algorithms and a data privacy mechanism that address this problem. The proposed solutions provide additional trustworthiness and reliability at a modest cost of memory and energy. Also, we prove that our proposed solutions provide protection against various privacy disclosure attacks, such as eavesdropping and hop-by-hop trace back attacks.
Wan Yeon Lee, Soo Kim, Heejo Lee, Hyogon Kim, "Enhancing Resiliency of Networks: Evolving Strategy vs. Multihoming", IEICE Trans. on Communications, Vol. E93-B, No. 1, pp. 174-177, Jan. 2010.
Network resiliency has become crucial as the failure of a group of networks happens more frequently, being caused by either natural disasters or malicious attacks. In order to enhance the resiliency of the Internet, we show that changing the evolving strategy is more important than increasing the number of links by multihoming, which connects a single network with two or more links. From the simulation with Internet topologies, it is shown that the resiliency of the Internet can be enhanced by replacing the current evolving strategy only in part.
Xuan Hung Le, Sungyoung Lee, Ismail Butun, Murad Khalid, Ravi Sankar, Miso (Hyoung-IL) Kim, Manhyung Han, Young-Koo Lee, Heejo Lee, "An Energy-Efficient Access Control Scheme for Wireless Sensor Networks based on Elliptic Curve Cryptography", Journal of Communications and Networks, Vol. 11, No. 6, pp. 599-606, Dec. 2009.
For many mission-critical wireless sensor network applications such as military and homeland security, user access restrictions must be enforced by access control mechanisms for different access rights. Public key-based access control schemes are more attractive than symmetric-key based approaches due to high scalability, low memory requirement, easy key-addition/revocation for a new node, and no key predistribution requirement. Although Wang et al. recently introduced a promising access control scheme based on elliptic curve cryptography (ECC), it is still burdensome for sensors and has several security limitations (it does not provide mutual authentication and is strictly vulnerable to denial-of-service (DoS) attacks). This paper presents an energy-efficient access control scheme based on ECC to overcome these problems and, more importantly, to provide dominant energy-efficiency. Through analysis and simulation based evaluations, we show that the proposed scheme overcomes the security problems and has far better energy-efficiency compared to the current scheme proposed by Wang et al.
Adrian Perrig, Wade Trappe, Virgil Gligor, Radha Poovendran, Heejo Lee, "Secure Wireless Networking", Journal of Communications and Networks, Vol. 11, No. 6, pp. 533-537, Dec. 2009.
Riaz Ahmed Shaikh, Hassan Jameel, Brian J. d'Auriol, Heejo Lee, Sungyoung Lee, Young-Jae Song, "Group-Based Trust Management Scheme for Clustered Wireless Sensor Networks ", IEEE Trans. on Parallel and Distributed Systems, Vol. 20, No. 11, pp. 1698-1712, Nov. 2009.
Traditional trust management schemes developed for wired and wireless ad hoc networks are not well suited for sensor networks due to their higher consumption of resources such as memory and power. In this work, we propose a new lightweight Group-based Trust Management Scheme (GTMS) for wireless sensor networks, which employs clustering. Our approach reduces the cost of trust evaluation. Also, theoretical as well as simulation results show that our scheme demands less memory, energy, and communication overheads as compared to the current state-of-the-art trust management schemes and it is more suitable for large-scale sensor networks. Furthermore, GTMS also enables us to detect and prevent malicious, selfish, and faulty nodes.
Hyundo Park, Hyogon Kim, Heejo Lee, "Is early warning of an imminent worm epidemic possible?", IEEE Network, Vol. 23, No. 5, pp. 14-20, Oct. 2. 2009.
This article introduces a novel anomaly detection method that makes use of only matrix operations and is highly sensitive to randomness in traffic. The sensitivity can be leveraged to detect attacks that exude randomness in traffic characteristics, such as denial-of-service attacks and worms. In particular, we show that the method can be used to alert of the imminent onset of a worm epidemic in a statistically sound manner, irrespective of the worm's scanning strategies.
Sunghyun Kim, Heejo Lee, "Reducing Payload Inspection Cost Using Rule Classification for Fast Attack Signature Matching", IEICE Trans. on Information and Systems, Vol. E92-D, No. 10, Oct. 1. 2009.
Network intrusion detection systems rely on a signature-based detection engine. When under attack or during heavy traffic
Wanyeon Lee, Hyogon Kim, Heejo Lee, "Maximum-Utility Scheduling of Operation Modes with Probabilistic Task Execution Times under Energy Constraints", IEEE Trans. on Computer-aided Design of Integrated Circuits and System, Vol. 28, No. 10, pp. 1531-1544, Oct. 2009.
We propose a novel scheduling scheme that determines the instant operation modes of multiple tasks. The tasks have probabilistic execution times and are executed on discrete operation modes providing different utilities with different energy consumptions. We first design an optimal offline scheduling scheme that stochastically maximizes the cumulative utility of the tasks under energy constraints, at the cost of heavy computational overhead. Next, the optimal offline scheme is modified to an approximate online scheduling scheme. The online scheme has little runtime overhead and yields almost the maximum utility, with an energy budget that is given at runtime. The difference between the maximum utility and the output utility of the online scheme is bounded by a controllable input value. Extensive evaluation shows that the output utility of the online scheme approaches the maximum utility in most cases, and is much higher than that of existing methods by up to 50% of the largest utility difference among available operation modes.
Riaz Ahmed Shaikh, Hassan Jameel, Brian J. d'Auriol, Heejo Lee, Sungyoung Lee, Young-Jae Song, "Intrusion-Aware Alert Validation Algorithm for Cooperative Distributed Intrusion Detection Schemes of Wireless Sensor Networks", Sensors, Vol. 9, pp. 5989-6007, Sep. 2009.
Existing anomaly and intrusion detection schemes of wireless sensor networks have mainly focused on the detection of intrusions. Once an intrusion is detected, alerts or claims will be generated. However, any unidentified malicious nodes in the network could send faulty anomaly and intrusion claims about the legitimate nodes to the other nodes. Verifying the validity of such claims is a critical and challenging issue that is not considered in the existing cooperative-based distributed anomaly and intrusion detection schemes of wireless sensor networks. In this paper, we propose a validation algorithm that addresses this problem. This algorithm utilizes the concept of intrusion-aware reliability that helps to provide adequate reliability at a modest communication cost. In this paper, we also provide a security resiliency analysis of the proposed intrusion-aware alert validation algorithm.
Hyunsang Choi, Heejo Lee, Hyogon Kim, "Fast Detection and Visualization of Network Attacks on Parallel Coordinates", Computers & Security, Vol. 28, No. 5, pp. 276-288, Jul. 2009.
This article presents what we call the parallel coordinate attack visualization (PCAV) for detecting unknown large-scale Internet attacks including Internet worms, DDoS attacks and network scanning activities. PCAV displays network traffic on the plane of parallel coordinates using the flow information such as the source IP address, destination IP address, destination port and the average packet length in a flow. The parameters are used to draw each flow as a connected line on the plane, where a group of polygonal lines form a particular shape in case of attack. From the observation that each attack type of significance forms a unique pattern, we develop nine signatures and their detection mechanism based on an efficient hashing algorithm. Using the graphical signatures, PCAV can quickly detect new attacks and enable network administrators to intuitively recognize and respond to the attacks. Compared with existing visualization works, PCAV can handle hyper-dimensions, i.e., can visualize more than 3 parameters if necessary, which significantly reduces false positives. As a consequence, Internet worms are more precisely detectable by machine and more easily recognizable by human. Another strength of PCAV is handling flows instead of packets. Per-flow visualization greatly reduces the processing time and further provides compatibility with legacy routers which export flow information, e.g., as NetFlow does in Cisco routers. We demonstrate the effectiveness of PCAV using real-life Internet traffic traces. The PCAV program is publicly available.
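The core mapping PCAV relies on can be sketched briefly: each flow becomes a polyline whose vertices are its normalized attributes on four vertical axes (source IP, destination IP, destination port, average packet length). The axis scaling below is a simplified illustration, not PCAV's exact normalization or hashing scheme.

```python
# Sketch of mapping a flow to a polyline on parallel coordinates
# (simplified axis scaling; illustrative only).
import ipaddress

AXES = ["src_ip", "dst_ip", "dst_port", "pkt_len"]
SCALE = {"src_ip": 2**32 - 1, "dst_ip": 2**32 - 1,
         "dst_port": 65535, "pkt_len": 1500}   # assumed axis maxima

def to_polyline(flow):
    """Return [(axis_index, height_in_[0,1]), ...] for one flow."""
    pts = []
    for i, axis in enumerate(AXES):
        v = flow[axis]
        if axis.endswith("_ip"):
            v = int(ipaddress.ip_address(v))   # IP address -> integer
        pts.append((i, v / SCALE[axis]))
    return pts

# A host port-scanning one target yields polylines that agree on the first
# two axes and fan out on the port axis -- a recognizable graphical signature.
flow = {"src_ip": "1.2.3.4", "dst_ip": "5.6.7.8", "dst_port": 80, "pkt_len": 60}
print(to_polyline(flow))
```

Plotting many such polylines makes attack classes visible as shapes (fans, wedges, lines), which is what the nine graphical signatures in the paper formalize.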
Muhammad Khaliq-ur-R, Heejo Lee, Sungyoung Lee, Young-Koo Lee, "MUQAMI+: a scalable and locally distributed key management scheme for clustered sensor networks", Annals of Telecommunications, Jul. 2009.
Wireless sensor networks (WSN) are susceptible to node capture and many network-level attacks. In order to provide protection against such threats, WSNs require lightweight and scalable key management schemes because the nodes are resource-constrained and high in number. Also, the effect of node compromise should be minimized and node capture should not hamper the normal working of a network. In this paper, we present an exclusion basis system-based key management scheme called MUQAMI+ for large-scale clustered sensor networks. We have distributed the responsibility of key management to multiple nodes within clusters, avoiding single points of failure and getting rid of costly inter-cluster communication. Our scheme is scalable and highly efficient in terms of re-keying and compromised node revocation.
LeXuan Hung, Sungyoung Lee, Young-Koo Lee, Heejo Lee, "SCODE: A Secure Coordination-Based Data Dissemination to Mobile Sinks in Sensor Networks", IEICE Trans. on Communications, Vol. E92-B, No. 1, pp. 131-142, Jan. 2009.
For many sensor network applications such as military and homeland security, it is necessary for users (sinks) to access sensor networks while they are moving. However, sink mobility brings new challenges to secure routing in large-scale sensor networks. Mobile sinks have to constantly propagate their current location to all nodes, and these nodes need to exchange messages with each other so that the sensor network can establish and maintain a secure multi-hop path between a source node and a mobile sink. This causes significant computation and communication overhead for sensor nodes. Previous studies on sink mobility have mainly focused on efficiency and effectiveness of data dissemination without security consideration. In this paper, we propose a secure and energy-efficient data dissemination protocol -- Secure COordination-based Data dissEmination (SCODE) -- for mobile sinks in sensor networks. We take advantage of coordination networks (grid structure) based on the Geographical Adaptive Fidelity (GAF) protocol to construct a secure and efficient routing path between sources and sinks. Our security analysis demonstrates that the proposed protocol can defend against common attacks in sensor network routing such as replay attacks, selective forwarding attacks, sinkhole and wormhole attacks, Sybil attacks, and HELLO flood attacks. Our performance evaluation, both in mathematical analysis and simulation, shows that SCODE significantly reduces communication overhead and energy consumption while keeping latency similar to existing routing protocols, and it always delivers more than 90 percent of packets successfully.
Jin-Ho Kim, Heejo Lee, Saewoong Bahk, "A Connection Management Protocol for Stateful Inspection Firewalls in Multi-Homed Networks", Journal of Communications and Networks, Vol. 10, No. 4, pp. 455-464, Dec. 2008.
To provide network services consistently under various network failures, enterprise networks increasingly utilize path diversity through multi-homing. As a result, multi-homed non-transit autonomous systems have come to surpass single-homed networks in number. In this paper, we address an inevitable problem that occurs when networks with multiple entry points deploy firewalls at their borders. The majority of today’s firewalls use stateful inspection that exploits connection state for fine-grained control. However, stateful inspection has a topological restriction such that outgoing and incoming traffic of a connection should pass through a single firewall to execute the desired packet filtering operation. Multi-homed networking environments suffer from this restriction, and BGP policies provide only coarse control over communication paths. Due to these features and the characteristics of datagram routing, there exists a real possibility of asymmetric routing. This mismatch between the exit and entry firewalls for a connection causes connection establishment failures. In this paper, we formulate this phenomenon into a state-sharing problem among multiple firewalls under asymmetric routing conditions. To solve this problem, we propose a stateful inspection protocol that requires very low processing and messaging overhead. Our protocol consists of the following two phases: 1) generation of a TCP SYN cookie marked with the firewall identification number upon a SYN packet arrival, and 2) state sharing triggered by a SYN/ACK packet arrival in the absence of the trail of its initial SYN packet. We demonstrate that our protocol is scalable, robust, and simple enough to be deployed for high speed networks. It also transparently works under any client-server configuration.
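The first phase above admits a compact sketch: embed a firewall identifier in the low bits of the SYN cookie, so whichever firewall later sees the SYN/ACK can tell which border firewall holds the connection state. The bit layout and hash choice here are illustrative assumptions, not the paper's exact encoding.

```python
# Hedged sketch of marking a SYN cookie with a firewall ID
# (illustrative bit layout, not the protocol's exact format).
import hashlib

FW_ID_BITS = 4   # assumed: supports up to 16 firewalls at the border

def make_cookie(secret, conn_tuple, fw_id):
    """32-bit cookie: keyed hash of the connection, low bits = firewall ID."""
    h = hashlib.sha256(repr((secret, conn_tuple)).encode()).digest()
    base = int.from_bytes(h[:4], "big") & ~((1 << FW_ID_BITS) - 1)
    return base | fw_id

def fw_id_of(cookie):
    """Recover the firewall ID from a cookie echoed in the SYN/ACK."""
    return cookie & ((1 << FW_ID_BITS) - 1)

conn = ("10.0.0.1", 12345, "93.184.216.34", 443)   # (src, sport, dst, dport)
cookie = make_cookie("shared-secret", conn, fw_id=3)
print(fw_id_of(cookie))   # 3
```

On an asymmetric return path, the entry firewall extracts `fw_id_of(cookie)` from the acknowledged sequence number and requests the connection state from that firewall, which is the state-sharing trigger of phase two.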
Le Xuan Hung, Ngo Trong Canh, Sungyoung Lee, Young-Koo Lee, Heejo Lee, "An Energy-Efficient Secure Routing and Key Management Scheme for Mobile Sinks in Wireless Sensor Networks Using Deployment Knowledge", Sensors, Vol. 8, pp. 7753-7782, Dec. 2008.
For many sensor network applications such as military or homeland security, it is essential for users (sinks) to access the sensor network while they are moving. Sink mobility brings new challenges to secure routing in large-scale sensor networks. Previous studies on sink mobility have mainly focused on efficiency and effectiveness of data dissemination without security consideration. Also, studies and experiences have shown that considering security during design time is the best way to provide security for sensor network routing. This paper presents an energy-efficient secure routing and key management for mobile sinks in sensor networks, called SCODEplus. It is a significant extension of our previous study in five aspects: (1) Key management scheme and routing protocol are considered during design time to increase security and efficiency; (2) The network topology is organized in a hexagonal plane which supports more efficiency than previous square-grid topology; (3) The key management scheme can eliminate the impacts of node compromise attacks on links between non-compromised nodes; (4) Sensor node deployment is based on Gaussian distribution which is more realistic than uniform distribution; (5) No GPS or like is required to provide sensor node location information. Our security analysis demonstrates that the proposed scheme can defend against common attacks in sensor networks including node compromise attacks, replay attacks, selective forwarding attacks, sinkhole and wormhole, Sybil attacks, HELLO flood attacks. Both mathematical and simulation-based performance evaluation show that the SCODEplus significantly reduces the communication overhead, energy consumption, packet delivery latency while it always delivers more than 97 percent of packets successfully.
Hassan Jameel, Riaz Ahmed Shaikh, Heejo Lee, Sungyoung Lee, "Human Identification Through Image Evaluation Using Secret Predicates", Lecture Notes in Computer Science, Vol. 4377, pp. 67-84, Dec. 1. 2007.
The task of developing protocols for humans to securely authenticate themselves to a remote server has been an interesting topic in cryptography, as a replacement for traditional, less secure, password based systems. The protocols proposed in the literature are based on some underlying difficult mathematical problem, tuned so as to be easily computable by humans. As a result, these protocols are easily broken when made efficiently executable. We present a Human Identification Protocol based on the ability of humans to efficiently process an image given a secret predicate. It is a challenge-response protocol in which a subset of images presented satisfies a secret predicate shared by the challenger and the user. We conjecture that it is hard to guess this secret predicate for adversaries, both humans and programs. The protocol can be efficiently executed by humans with the knowledge of the secret, which in turn is easily memorable and replaceable. We prove the security of the protocol separately for human adversaries and programs, based on two separate assumptions, and justify these assumptions with the help of an example implementation.
Hyogon Kim, Jongwon Yoon, Heejo Lee, "Error Bound of Collision Probability Estimation in Non-saturated IEEE 802.11 WLANs", IEICE Trans. on Communications, Vol. E90-B, No. 7, pp. 1884-1885, Jul. 2007.
We analytically prove that the error in the channel idle time-based collision probability estimation in the face of non-saturated stations is bounded by 2/(CWmin + 1) in IEEE 802.11 wireless LANs (WLANs). This work explicitly quantifies the impact of non-saturation, and the result vindicates the use of the estimation technique in real-life IEEE 802.11 WLANs, in such applications as acknowledgement-based link adaptation and throughput optimization through contention window size adaptation.
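The bound is concrete enough to evaluate directly. Taking CWmin = 31, the minimum contention window commonly used by 802.11b DSSS stations, the worst-case estimation error works out to:

```python
# Evaluating the stated error bound 2 / (CWmin + 1) for a common
# 802.11b value of CWmin (31); other PHYs use different CWmin values.
cwmin = 31
bound = 2 / (cwmin + 1)
print(bound)   # 0.0625
```

An error of at most 0.0625 in the estimated collision probability is what makes the idle-time-based estimator usable in practice despite non-saturated stations.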
Wan Yeon Lee, Heejo Lee, "Evaluation of Authentication Internetworking Methods among Multiple WLAN Service Providers", Int'l Journal of Communication Systems, Vol. 20, No. 5, pp. 515-531, May. 2007.
The interworking technologies to combine multiple WLANs into a single virtual system have not been studied extensively, particularly for legacy wireless networks. In this paper, we study how to provide inter-domain authentication among multiple WLAN service providers with minimum overhead. We introduce five inter-domain authentication methods, referred to as Info-Sharing, AP-Seq, AP-Con, AS-Seq and AS-Con, which are designed in the form of an extension to the standard IEEE 802.1x and EAP protocols. In order to evaluate these methods, we compare their authentication time, implementation cost, confidentiality, flexibility and increment of messages. From the evaluation with analysis and experiments, we show that the AS-Con method can provide the authentication interworking function with minimal overhead on legacy network equipment. It is also shown that, even though the authentication of AS-Con takes longer than the previous method, the difference is under one second and not noticeable to users.
Sangki Yun, Hyogon Kim, Heejo Lee, Inhye Kang, "100+ VoIP Calls on 802.11b: The Power of Combining Voice Frame Aggregation and Uplink-Downlink Bandwidth Control in Wireless LANs", IEEE Journal on Selected Areas in Communications, Vol. 25, No. 4, pp. 689-698, May. 2007.
The bandwidth efficiency of Voice over IP (VoIP) traffic on the IEEE 802.11 WLAN is notoriously low. VoIP over 802.11 incurs high bandwidth cost for voice frame packetization and MAC/PHY framing, which is aggravated by channel access overhead. For instance, 10 calls with the G.729 codec can barely be supported on 802.11b with acceptable QoS – less than 2% efficiency. As WLANs and VoIP services become increasingly widespread, this inefficiency must be overcome. This paper proposes a solution that boosts the efficiency high enough to support a significantly larger number of calls than existing schemes, with fair call quality. The solution comes in two parts: adaptive frame aggregation and uplink/downlink bandwidth equalization. The former reduces the absolute number of MAC frames according to the link congestion level, and the latter balances the bandwidth usage between the access point (AP) and wireless stations. When used in combination, they yield superior performance, for instance, supporting more than 100 VoIP calls over an IEEE 802.11b link. We demonstrate the performance of the proposed approach through extensive simulation, and validate the simulation through analysis.
Hyogon Kim, Sangki Yun, Heejo Lee, "Boosting VoIP Capacity of Wireless Mesh Networks through Lazy Frame Aggregation", IEICE Trans. on Communications, Vol. E90-B, No. 5, pp. 1283-1285, May. 2007.
A novel method of voice frame aggregation for wireless mesh networks is presented. In the method, the degree of aggregation is automatically regulated by the congestion level on the wireless link. On the IEEE 802.11-based mesh network, it is shown to yield approximately twice the call capacity, while incurring no additional delay for frame aggregation.
Hyundo Park, Heejo Lee, Hyogon Kim, "Detecting Unknown Worms using Randomness Check", IEICE Trans. on Communications, Vol. E90-B, No. 4, pp. 894-903, Apr. 2007.
Since the introduction of the CodeRed and Slammer worms, it has been learned that the early detection of worm epidemics is important in order to reduce the damage resulting from outbreaks. A prominent characteristic of Internet worms is the random selection of subsequent targets. In this paper, we propose a new worm detection mechanism that checks the random distribution of destination addresses in network traffic. The proposed mechanism constructs a matrix from network traffic and checks the rank of the matrix in order to detect the spreading of Internet worms. From the fact that a random binary matrix holds a high rank value, ADUR (Anomaly Detection Using Randomness check) is proposed for detecting unknown worms based on the rank of the matrix. Experiments in various environments demonstrate that the ADUR mechanism effectively detects the spread of new worms in the early stages, even when there is only a single infected host in a monitoring network. Also, we show that ADUR is highly sensitive, so that a worm epidemic can be detected quickly, e.g., three times earlier than the infection of 90% of vulnerable hosts.
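The rank-based randomness check can be sketched in a few lines: worm scans pick destinations nearly uniformly at random, and a random binary matrix almost surely has high rank over GF(2), whereas structured benign traffic yields a low-rank matrix. The matrix construction below is a simplification; ADUR's actual encoding of traffic into a matrix differs.

```python
# Hedged sketch of a GF(2) rank check on traffic-derived binary matrices
# (simplified matrix construction, not ADUR's actual encoding).
import random

def gf2_rank(rows):
    """Rank over GF(2) of a binary matrix given as integer bitmask rows."""
    basis = {}                          # leading-bit position -> reduced row
    for row in rows:
        cur = row
        while cur:
            lead = cur.bit_length() - 1
            if lead not in basis:       # new independent direction
                basis[lead] = cur
                break
            cur ^= basis[lead]          # eliminate the leading bit
    return len(basis)

random.seed(0)
worm_like = [random.getrandbits(32) for _ in range(32)]   # random destinations
benign = [0b10110011] * 32                                # one repeated pattern

print(gf2_rank(worm_like) > gf2_rank(benign))   # True
```

A high rank signals randomness (worm-like scanning), while repetitive benign traffic collapses to rank 1, which is the separation the detector thresholds on.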
Hyogon Kim, Heejo Lee, Sangmin Shin, "On the cross-layer impact of TCP ack thinning on IEEE 802.11 wireless MAC dynamics", IEICE Trans. on Communications, Vol. E90-B, No. 2, pp. 412-416, Feb. 2007.
ACK thinning refers to the technique of discarding or reducing TCP acknowledgements (ACKs) in order to divert scarce bandwidth to TCP data traffic. It has been shown that, under some circumstances, the technique is effective in boosting TCP throughput on wireless links, in particular the IEEE 802.11 wireless LAN (WLAN). In this letter, however, we show that ACK thinning backfires under congestion due to its cross-layer impact on 802.11 MAC dynamics. Using ACK filtering as an example, we demonstrate the phenomenon and analyze its cause. Based on the analysis, we show how controlling the IEEE 802.11 contention window size solves the problem.
Wan Yeon Lee, Heejo Lee, "Energy-efficient Scheduling for Multiprocessors", IEE Electronics Letters, Vol. 42, No. 21, pp. 1200-1201, Oct. 2006.
An energy-efficient scheduling algorithm is proposed for parallel tasks in a multiprocessor system. The proposed algorithm utilises the dynamic voltage scaling (DVS) method for low energy consumption and executes tasks in parallel to compensate for the execution delay induced by the DVS method.
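The abstract's core trade-off can be sketched with back-of-the-envelope arithmetic: dynamic energy per cycle scales roughly with f^2 (since supply voltage scales with frequency), so halving the frequency quarters the energy but doubles the runtime, and splitting the work across two processors recovers the deadline. The quadratic energy model and the perfect-speedup assumption below are illustrative simplifications, not the paper's exact model.

```python
# Back-of-the-envelope sketch of the DVS-plus-parallelism idea.
# Energy model (E ~ cycles * f^2) and linear speedup are assumptions.

def energy(cycles: float, freq: float) -> float:
    """Relative dynamic energy: cycles * f^2 (arbitrary units)."""
    return cycles * freq ** 2

W = 1e9          # total work in cycles (hypothetical task)
f = 1.0          # normalized full frequency

# One processor at full speed:
t_fast = W / f
e_fast = energy(W, f)

# Two processors at half speed (linear speedup assumed):
t_slow = (W / 2) / (f / 2)          # same finish time as before
e_slow = 2 * energy(W / 2, f / 2)   # total energy across both processors

print(t_fast == t_slow)      # prints True: deadline preserved
print(e_slow / e_fast)       # prints 0.25: a 4x energy reduction
```

This is precisely the compensation mechanism the abstract describes: parallel execution absorbs the delay that voltage scaling introduces.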
Heejo Lee, Jong Kim, Wanyeon Lee, "Resiliency of Network Topologies under Path-Based Attacks", IEICE Trans. on Communications, Vol. E89-B, No. 10, pp. 2878-2884, Oct. 2006.
Network topology has no direct effect on the correctness of network protocols; however, it influences the performance of networks and their survivability under attack. Recent studies have analyzed the robustness of the Internet in the face of faults or attacks that cause node failures. However, the effect of a link failure or a series of link failures has not been extensively examined, even though such situations are more likely to occur in the current Internet environment. In this paper, we propose an attack-and-failure graph model and practical techniques for attacking strategies against nodes, edges, or paths in order to reflect real-life attack scenarios. The resiliency of Internet topologies is examined under these attacking strategies with various metrics, including the path-failure ratio.
Wan Yeon Lee, Heejo Lee, "Optimal Scheduling for Real-Time Parallel Tasks", IEICE Trans. on Information and Systems, Vol. E89-D, No. 6, pp. 1962-1966, Jun. 2006.
We propose an optimal algorithm for the real-time scheduling of parallel tasks on multiprocessors, where the tasks have the properties of flexible preemption, linear speedup, bounded parallelism, and arbitrary deadlines. The proposed algorithm is optimal in the sense that it always finds a feasible schedule if one exists. Furthermore, the algorithm delivers the schedule that consumes the fewest processors among all feasible schedules. In this letter, we prove the optimality of the proposed algorithm. We also show that the time complexity of the algorithm is O(M^2*N^2) in the worst case, where M and N are the number of tasks and the number of processors, respectively.
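The task model above can be made concrete with a toy feasibility check (not the paper's algorithm): under linear speedup with bounded parallelism, a single preemptible task with W cycles of work, per-task parallelism bound m, and deadline d can finish on N processors only if W <= min(m, N) * d. The function name and numbers below are hypothetical.

```python
# Toy illustration of the task model (NOT the paper's optimal algorithm):
# a necessary condition for one malleable task under linear speedup.

def single_task_feasible(work: float, max_par: int, deadline: float,
                         processors: int) -> bool:
    """True iff the task can possibly meet its deadline: the task can run
    on at most min(max_par, processors) processors at once, so its best
    completion time is work / min(max_par, processors)."""
    return work <= min(max_par, processors) * deadline

print(single_task_feasible(work=100, max_par=4, deadline=30, processors=8))  # True
print(single_task_feasible(work=100, max_par=2, deadline=30, processors=8))  # False
```

The paper's contribution is handling many such tasks simultaneously while also minimizing the number of processors consumed, which this single-task check does not capture.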
Heejo Lee, Jong Kim, Sungje Hong, Sunggu Lee, "Processor Allocation and Task Scheduling of Matrix Chain Products on Parallel Systems", IEEE Trans. on Parallel and Distributed Systems, Vol. 14, No. 4, pp. 394-407, Apr. 2003.
Heejo Lee, Jong Kim, Sungje Hong, Sunggu Lee, "Task Scheduling using a Block Dependency DAG for Block-Oriented Sparse Cholesky Factorization", Parallel Computing, Vol. 29, No. 1, pp. 135-159, Jan. 2003.
Block-oriented sparse Cholesky factorization decomposes a sparse matrix into rectangular subblocks; each block can then be handled as a computational unit in order to increase data reuse in a hierarchical memory system. Also, the factorization method increases the degree of concurrency and reduces the overall communication volume so that it performs more efficiently on a distributed-memory multiprocessor system than the customary column-oriented factorization method. But until now, mapping of blocks to processors has been designed for load balance with restricted communication patterns. In this paper, we represent tasks using a block dependency DAG that represents the execution behavior of block sparse Cholesky factorization in a distributed-memory system. Since the characteristics of tasks for block Cholesky factorization are different from those of the conventional parallel task model, we propose a new task scheduling algorithm using a block dependency DAG. The proposed algorithm consists of two stages: early-start clustering, and affined cluster mapping (ACM). The early-start clustering stage is used to cluster tasks while preserving the earliest start time of a task without limiting parallelism. After task clustering, the ACM stage allocates clusters to processors considering both communication cost and load balance. Experimental results on a Myrinet cluster system show that the proposed task scheduling approach outperforms other processor mapping methods.
Heejo Lee, Jong Kim, Sungje Hong, "Evaluation of Two Load-Balancing Primary-Backup Process Allocation Schemes", IEICE Trans. on Information and Systems, Vol. E82-D, No. 12, pp. 1535-1544, Dec. 1999.
In this paper, we present two process allocation schemes that tolerate multiple faults when the primary-backup replication method is used. The first scheme, called the multiple backup scheme, runs multiple backup processes for each process in order to tolerate multiple faults. The second scheme, called the regenerative backup scheme, runs only one backup process per process but regenerates backup processes for processes that lose theirs after a fault, keeping the primary-backup process pair available. For both schemes, we propose heuristic process allocation methods that balance loads in spite of the occurrence of faults, and we evaluate and compare their performance using simulation. We then analyze the reliability of the two schemes based on their fault-tolerance capability: we derive the degree of fault tolerance for each scheme and then obtain its reliability using Markov chains. The comparison results indicate that the regenerative single backup process allocation scheme is more suitable than the multiple backup allocation scheme.
Jong Kim, Heejo Lee, Sunggu Lee, "Replicated Process Allocation for Load Distribution in Fault-Tolerant Multicomputers", IEEE Trans. on Computers, Vol. 46, No. 4, pp. 499-505, Apr. 1997.
In this paper, we consider a load-balancing process allocation method for fault-tolerant multicomputer systems that balances the load before as well as after faults begin to degrade the performance of the system. In order to tolerate a single fault, each process (the primary process) is duplicated, i.e., has a backup process. The backup process executes on a different processor from the primary, checkpointing the primary process and recovering it if the primary fails. We formalize the load-balancing process allocation problem, propose a new process allocation method, and analyze its performance. Simulations are used to compare the proposed method with a process allocation method that does not take into account the different load characteristics of primary and backup processes. While both methods perform well before the occurrence of a fault, only the proposed method maintains a balanced load after such a fault occurs.
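The load-balancing idea behind the two primary-backup papers above can be illustrated with a simple greedy allocation. This is a hypothetical simplification, not the papers' actual methods: each process contributes a heavy primary load and a light backup (checkpointing) load, and the two replicas must land on different processors.

```python
# Illustrative greedy primary-backup allocation (a simplification, not
# the papers' exact methods): place each primary on the least-loaded
# processor and its backup on the least-loaded *other* processor, so the
# load stays balanced both before and after a single fault.

PRIMARY_LOAD = 1.0
BACKUP_LOAD = 0.2   # checkpointing cost relative to the primary (assumed)

def allocate(num_processes: int, num_procs: int):
    load = [0.0] * num_procs
    placement = []
    for _ in range(num_processes):
        # Least-loaded processor hosts the primary...
        p = min(range(num_procs), key=lambda i: load[i])
        # ...and the least-loaded remaining processor hosts the backup.
        b = min((i for i in range(num_procs) if i != p), key=lambda i: load[i])
        load[p] += PRIMARY_LOAD
        load[b] += BACKUP_LOAD
        placement.append((p, b))
    return placement, load

placement, load = allocate(num_processes=6, num_procs=3)
print(placement)   # (primary, backup) pairs, always on distinct processors
print(load)        # per-processor loads stay close to one another
```

The papers go further by distinguishing the load characteristics of primaries and backups analytically and by accounting for the load shift that occurs when a backup is promoted after a fault.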