Total: 105

Yoonjong Na, Seunghoon Woo, Joomyeong Lee, Heejo Lee, "CNEPS: A Precise Approach for Examining Dependencies among Third-Party C/C++ Open-Source Components", IEEE/ACM Int'l Conf. on Software Engineering (ICSE), Apr. 14. 2024.
The rise in open-source software (OSS) reuse has led to intricate dependencies among third-party components, increasing the demand for precise dependency analysis. However, owing to the presence of reused files whose originating components are difficult to identify (i.e., indistinguishable files) and of duplicated components, precisely identifying component dependencies is becoming challenging. In this paper, we present CNEPS, a precise approach for examining dependencies in reused C/C++ OSS components. The key idea of CNEPS is to use a novel granularity called a module, which represents the minimum unit (i.e., a set of source files) that can be reused as a library by another project. By examining dependencies based on modules instead of analyzing single reused files, CNEPS can precisely identify dependencies in target projects, even in the presence of indistinguishable files. To differentiate duplicated components, CNEPS examines the cloned paths and originating projects of each component, enabling precise identification of the dependencies associated with them. Experimental results on the top 100 C/C++ software projects show that CNEPS outperforms a state-of-the-art approach by identifying twice as many dependencies. CNEPS identified 435 dependencies with 89.9% precision and 93.2% recall in less than 10 seconds per application on average, whereas the existing approach achieved only 63.5% precision and 42.5% recall.
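The module-granularity idea can be illustrated with a toy sketch (not the CNEPS implementation; the component and file names below are hypothetical): a dependency is attributed when enough of a module's files appear in the target, so a file shared by several components is resolved through its module context rather than on its own.

```python
# Illustrative sketch of module-level dependency attribution.
# Module and file names are hypothetical examples.

def attribute_modules(target_files, modules):
    """Attribute a module to the target if enough of its files
    (here, a simple majority) appear in the target project."""
    found = {}
    for name, module_files in modules.items():
        overlap = module_files & target_files
        if len(overlap) * 2 >= len(module_files):  # >= 50% of the module present
            found[name] = overlap
    return found

modules = {
    "zlib":   {"inflate.c", "deflate.c", "adler32.c"},
    "libpng": {"png.c", "pngread.c", "adler32.c"},  # adler32.c is shared: indistinguishable alone
}
target = {"inflate.c", "deflate.c", "adler32.c", "main.c"}

deps = attribute_modules(target, modules)
# zlib matches (3/3 files present); libpng does not (1/3), so the
# shared adler32.c is attributed to zlib via its module context.
```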
Seunghoon Woo, Eunjin Choi, Heejo Lee, Hakjoo Oh, "V1SCAN: Discovering 1-day Vulnerabilities in Reused C/C++ Open-source Software Components Using Code Classification Techniques", USENIX Security Symposium, pp. 6541-6556, Aug. 9. 2023.
We present V1SCAN, an effective approach for discovering 1-day vulnerabilities in reused C/C++ open-source software (OSS) components. Reusing third-party OSS has many benefits, but it can put the entire software at risk owing to the vulnerabilities it propagates. In mitigation, several techniques for detecting propagated vulnerabilities have been proposed, which can be classified into version- and code-based approaches. However, state-of-the-art techniques unfortunately produce many false positives or negatives when OSS projects are reused with code modifications. In this paper, we show that these limitations can be addressed by improving version- and code-based approaches and synergistically combining them. By classifying the code reused from OSS components, V1SCAN considers only vulnerabilities contained in the target program and filters out unused vulnerable code, thereby reducing the false alarms produced by version-based approaches. V1SCAN improves the coverage of code-based approaches by classifying vulnerable code and then detecting vulnerabilities propagated with code changes in various code locations. Evaluation on popular C/C++ software on GitHub showed that V1SCAN outperformed state-of-the-art vulnerability detection approaches by discovering 50% more vulnerabilities than they detected. In addition, V1SCAN reduced the false positive rate of a simple integration of existing version- and code-based approaches from 71% to 4% and the false negative rate from 33% to 7%. With V1SCAN, developers can detect propagated vulnerabilities with high accuracy, maintaining a secure software supply chain.
Taejun Lee, Heewon Park, Heejo Lee, "AutoMetric: Towards Measuring Open-Source Software Quality Metrics Automatically", The 4th ACM/IEEE International Conference on Automation of Software Test (AST), May. 15. 2023.
These days, a significant portion of open-source software (OSS) is required to develop software. A few measurements exist that can verify the safety of OSS, but technologies for automating them are insufficient. To address this problem, we propose AutoMetric, an automatic tool for measuring the security metrics of OSS at the repository level. Because AutoMetric collects only the repository address of a project, it can inspect many projects at once, regardless of their size and scope. AutoMetric contains five metrics: Mean Time to Update (MU), Mean Time to Commit (MC), Number of Contributors (NC), Inactive Period (IP), and Branch Protection (BP). These indicators can be recalculated quickly even when the source code changes. Comparing the AutoMetric metrics with 2,675 vulnerabilities reported in the GitHub Advisory Database (GAD), the results show that more frequent updates and commits and shorter inactive periods were associated with more reported vulnerabilities.
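Two of these indicators can be illustrated from timestamps alone. This is a hedged sketch; the paper's exact definitions of MC and IP may differ:

```python
# Hedged sketch of two AutoMetric-style indicators computed from
# commit dates; the paper's exact formulas may differ.
from datetime import date

def mean_time_between(dates):
    """Mean number of days between consecutive events (e.g., commits)."""
    ds = sorted(dates)
    gaps = [(b - a).days for a, b in zip(ds, ds[1:])]
    return sum(gaps) / len(gaps)

def inactive_period(dates, today):
    """Days elapsed since the most recent event."""
    return (today - max(dates)).days

commits = [date(2023, 1, 1), date(2023, 1, 8), date(2023, 1, 22)]
mc = mean_time_between(commits)                   # (7 + 14) / 2 = 10.5 days
ip = inactive_period(commits, date(2023, 2, 1))   # 10 days since Jan 22
```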
Kyeongseok Yang, Sudharssan Mohan, Yonghwi Kwon, Heejo Lee, Chung Hwan Kim, "Poster: Automated Discovery of Sensor Spoofing Attacks on Robotic Vehicles", Computer and Communications Security (CCS), Nov. 7. 2022.
Robotic vehicles are playing an increasingly important role in our daily life. Unfortunately, attackers have demonstrated various sensor spoofing attacks that interfere with robotic vehicle operations, posing serious threats. Thus, it is crucial to discover such attacks earlier than attackers so that developers can secure the vehicles. In this paper, we propose a new sensor fuzzing framework, SensorFuzz, that can systematically discover potential sensor spoofing attacks on robotic vehicles. It generates malicious sensor inputs by formally modeling existing sensor attacks and leveraging high-fidelity vehicle simulation, and then analyzes the impact of the inputs on the vehicle with a resilience-based feedback mechanism.
Seunghoon Woo, Hyunji Hong, Eunjin Choi, Heejo Lee, "MOVERY: A Precise Approach for Modified Vulnerable Code Clone Discovery from Modified Open-Source Software Components", USENIX Security Symposium, Aug. 10. 2022.
Vulnerabilities inherited from third-party open-source software (OSS) components can compromise the security of the entire software. However, discovering propagated vulnerable code is challenging because it proliferates with various code syntaxes owing to OSS modifications, specifically internal modifications (e.g., OSS updates) and external modifications (e.g., code changes that occur during OSS reuse). In this paper, we present MOVERY, a precise approach for discovering vulnerable code clones (VCCs) from modified OSS components. By considering the oldest vulnerable function and extracting only the core vulnerable and patch lines from security patches, MOVERY generates vulnerability and patch signatures that effectively address OSS modifications. For scalability, MOVERY reduces the search space of the target software by focusing only on the code borrowed from other OSS projects. Finally, MOVERY determines that a function is a VCC when it matches the vulnerability signature and is distinct from the patch signature. When we applied MOVERY to ten popular software programs selected from diverse domains, we observed that 91% of the discovered VCCs had different code syntax from the disclosed vulnerable function. Nonetheless, MOVERY discovered at least 2.5 times more VCCs than existing techniques, with much higher accuracy: MOVERY discovered 415 VCCs with 96% precision and 96% recall, whereas two recent VCC discovery techniques, which hardly consider internal and external OSS modifications, discovered only 163 and 72 VCCs with at most 77% precision and 38% recall.
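The decision rule at the end can be sketched as a simple set check (illustrative only; the signature lines below are hypothetical, and the real signatures are extracted from security patches):

```python
# Minimal sketch of a MOVERY-style decision rule: a function is flagged
# as a vulnerable code clone (VCC) only if it contains the core
# vulnerable lines AND lacks the core patch lines. Lines are made up.

def is_vcc(function_lines, vuln_sig, patch_sig):
    lines = set(function_lines)
    return vuln_sig <= lines and not (patch_sig & lines)

vuln_sig  = {"memcpy(dst, src, len);"}          # core vulnerable line
patch_sig = {"if (len > sizeof(dst)) return;"}  # core patch line

unpatched = ["int f() {", "memcpy(dst, src, len);", "}"]
patched   = ["int f() {", "if (len > sizeof(dst)) return;",
             "memcpy(dst, src, len);", "}"]
# The unpatched function matches the vulnerability signature and is
# distinct from the patch signature, so it is reported as a VCC.
```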
Haram Park, Carlos Kayembe Nkuba, Seunghoon Woo, Heejo Lee, "L2Fuzz: Discovering Bluetooth L2CAP Vulnerabilities Using Stateful Fuzz Testing", Annual IEEE/IFIP Int'l Conf. on Dependable Systems and Networks (DSN), Jun. 27. 2022.
Bluetooth Basic Rate/Enhanced Data Rate (BR/EDR) is a wireless technology used in billions of devices. Recently, several Bluetooth fuzzing studies have been conducted to detect vulnerabilities in Bluetooth devices, but they fall short of effectively generating malformed packets. In this paper, we propose L2FUZZ, a stateful fuzzer for detecting vulnerabilities in the Bluetooth BR/EDR Logical Link Control and Adaptation Protocol (L2CAP) layer. By selecting valid commands for each state and mutating only the core fields of packets, L2FUZZ can generate valid malformed packets that are less likely to be rejected by the target device. Our experimental results confirmed that: (1) L2FUZZ generates up to 46 times more malformed packets with a much lower packet rejection ratio compared to existing techniques, and (2) L2FUZZ detected five zero-day vulnerabilities in eight real-world Bluetooth devices.
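The stateful mutation idea can be sketched as follows (a toy illustration; the state names, opcodes, and field layout are hypothetical, not the real L2CAP packet format):

```python
# Toy sketch of stateful mutation: pick a command valid in the current
# state and mutate only a "core" payload field, keeping the header
# valid so the target is less likely to reject the packet.
import random

# state -> opcodes valid in that state (hypothetical values)
VALID_COMMANDS = {
    "CLOSED":    [0x02],        # e.g., a connection request
    "CONNECTED": [0x04, 0x06],  # e.g., configuration / disconnection requests
}

def make_packet(state, rng):
    """Build a packet whose header stays valid while a core field is mutated."""
    opcode = rng.choice(VALID_COMMANDS[state])
    header = bytes([opcode, 0x01])                         # opcode + identifier: kept valid
    payload = bytes(rng.randrange(256) for _ in range(4))  # core field: randomly mutated
    return header + payload

rng = random.Random(0)  # fixed seed for reproducibility
pkt = make_packet("CONNECTED", rng)
```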
Jeonghwa Heo, Heewoong Jang, Heejo Lee, "How to divide a permission token in the delegation process of blockchain-based access control for IoT", The 16th Annual IEEE Int'l Systems Conf. (SYSCON), Apr. 25. 2022.
Hyunji Hong, Seunghoon Woo, Heejo Lee, "Dicos: Discovering Insecure Code Snippets from Stack Overflow Posts by Leveraging User Discussions", Annual Computer Security Applications Conf. (ACSAC), pp. 194–206, Dec. 6. 2021.
Online Q&A forums such as Stack Overflow assist developers in solving the coding problems they face. Despite these advantages, Stack Overflow has the potential to provide insecure code snippets that, if reused, can compromise the security of the entire software. We present Dicos, an accurate approach for discovering insecure code snippets by examining the change history of Stack Overflow posts. When a security issue is detected in a post, the insecure code is fixed through user discussions, leaving a change history. Inspired by this process, Dicos first extracts the change history from a Stack Overflow post and then analyzes whether the history contains security patches, utilizing pre-selected features that can effectively identify them. Finally, when such changes are detected, Dicos determines that the code snippet before the security patch was applied is insecure. To evaluate Dicos, we collected 1,958,283 Stack Overflow posts tagged with C, C++, and Android. When we applied Dicos to the collected posts, it discovered 12,458 insecure posts (i.e., 14,719 insecure code snippets) with 91% precision and 93% recall. We further confirmed that the latest versions of 151 out of 2,000 popular C/C++ open-source software projects contain at least one insecure code snippet taken from Stack Overflow, as discovered by Dicos. Our proposed approach, Dicos, can contribute to preventing the further propagation of insecure code and thus to creating a safe code-reuse environment.
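The patch-detection step can be illustrated with a toy diff check (the hint list below is illustrative and is not the paper's pre-selected feature set):

```python
# Simplified sketch of the idea: diff two revisions of a post's code
# snippet and look for added lines with features that often indicate
# a security patch. The feature list here is made up for illustration.
import difflib

SECURITY_HINTS = ("strncpy", "snprintf", "sizeof", "bounds", "overflow")

def looks_like_security_patch(old_code, new_code):
    added = [l[2:] for l in difflib.ndiff(old_code.splitlines(),
                                          new_code.splitlines())
             if l.startswith("+ ")]
    return any(h in line for line in added for h in SECURITY_HINTS)

old = "strcpy(buf, input);"
new = "strncpy(buf, input, sizeof(buf) - 1);"
# The old revision precedes a security-looking change, so the snippet
# before the change would be flagged as insecure.
```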
Seunghoon Woo, Dongwook Lee, Sunghan Park, Heejo Lee, Sven Dietrich, "V0Finder: Discovering the Correct Origin of Publicly Reported Software Vulnerabilities", USENIX Security Symposium, pp. 3041-3058, Aug. 11. 2021.
Common Vulnerabilities and Exposures (CVEs) are used to ensure confidence among developers, to share information about software vulnerabilities, and to provide a baseline for security measures. Therefore, the correctness of CVE reports is crucial for detecting and patching software vulnerabilities. In this paper, we introduce the concept of “Vulnerability Zero” (VZ), the software where a vulnerability first originated. We then present V0Finder, a precise mechanism for discovering the VZ of a vulnerability, including the software name and version. V0Finder utilizes code-based analysis to identify reuse relations, which specify the direction of vulnerability propagation, among vulnerable software. V0Finder constructs a graph from all the identified directions and traces backward to the root of that graph to find the VZ. We applied V0Finder to 5,671 CVE vulnerabilities collected from the National Vulnerability Database (NVD) and popular Bugzilla-based projects. V0Finder discovered VZs with high accuracy: 98% precision and 95% recall. Furthermore, V0Finder identified 96 CVEs with incorrect information related to their respective VZs. We confirmed that an incorrect VZ causes prolonged patch updates of vulnerable software; the patch update of CVEs with incorrect VZ information takes 2 years, while the patch update of CVEs with the correct VZ takes less than a year on average. Such incorrectly identified VZs hinder the objective of CVEs and cause confusion rather than “ensuring confidence” among developers. Our analysis shows that V0Finder can enhance the credibility of the information provided by CVEs.
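The backward trace can be sketched as a root search over reuse edges (software names are hypothetical, and a single reuse parent per project is assumed for simplicity):

```python
# Sketch of the backward trace: reuse edges point from the originating
# software to the software that reused (and inherited) the vulnerable
# code; the Vulnerability Zero (VZ) is the root of the resulting graph.
# Software names are made up; a single reuse path is assumed.

def find_vz(edges, start):
    """Walk reuse edges backward from any vulnerable package to the root."""
    parent = {dst: src for src, dst in edges}
    node = start
    while node in parent:
        node = parent[node]
    return node

# libfoo's vulnerable code was reused by libbar, which app-baz reused.
edges = [("libfoo", "libbar"), ("libbar", "app-baz")]
```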
Hajin Jang, Kyeongseok Yang, Geonwoo Lee, Jeremy D. Seideman, Shoufu Luo, Sven Dietrich, Heejo Lee, "QuickBCC: Quick and Scalable Binary Vulnerable Code Clone Detection", IFIP Int'l Conf. on ICT Systems Security and Privacy Protection (IFIP SEC), pp. 66-82, Jun. 22. 2021.
Due to code reuse among software packages, vulnerabilities can propagate from one software package to another. Current code clone detection techniques are useful for preventing and managing such vulnerability propagation. When the source code for a software package is not available, such as when working with proprietary or custom software distributions, binary code clone detection can be used to examine the software for flaws. However, existing binary code clone detectors have scalability issues or are limited in their accurate detection of vulnerable code clones. In this paper, we introduce QuickBCC, a scalable binary code clone detection framework designed for vulnerability scanning. The framework is built on the idea of extracting semantics from vulnerable binaries both before and after security patches and comparing them to target binaries. To improve performance, we create a signature based on the changes between the pre- and post-patch binaries and implement a filtering process when comparing the signatures to the target binaries. In addition, we leverage the smallest semantic unit, a strand, to improve accuracy and robustness across compilation environments. QuickBCC is highly optimized, capable of preprocessing 5,439 target binaries within 111 minutes, and is able to match those binaries against 6 signatures in 23 seconds when running as a multi-threaded application. QuickBCC takes, on average, 3 ms to match one target binary. Compared with other approaches, QuickBCC outperformed them in detecting well-known vulnerabilities with an acceptable level of accuracy.
Junwon Lee, Heejo Lee, "Secure and Scalable IoT: An IoT Network Platform Based on Network Overlay and MAC Security", IFIP Int'l Conf. on ICT Systems Security and Privacy Protection (IFIP SEC), pp. 287-301, Jun. 22. 2021.
IoT, which is closely connected with our daily lives, shows high growth in the automotive, healthcare, and retail fields. IoT security threats can cause severe problems in our lives, yet the security of IoT networks is insufficient to cope with them. An attacker can therefore use man-in-the-middle (MITM) attacks, DNS manipulation, and route tampering for eavesdropping, privacy breaches, service outages and delays, power consumption, and system manipulation. Currently, VPNs and data encryption are applied to protect IoT networks from these security threats. However, due to the limited resources of IoT devices, TCP/IP-based VPNs and encryption are also limited. Even when a lightweight IoT communication protocol such as 6LoWPAN is used, TCP/IP-based VPNs such as IPsec, OpenVPN, and WireGuard require bandwidth, CPU/memory, and electric power at the level of general endpoint devices. In this paper, we propose a secure and scalable IoT (SSI) network platform that can prevent security threats while minimizing the use of an IoT device's computing resources. SSI, which imposes a lower load than TCP/IP-based VPNs, is a layer 2 VPN and supplies data link frame encryption. L2TP and VXLAN are provided for a scalable layer 2 VPN, and the MACsec algorithm encrypts layer 2 frames. SSI shows a 30% network speed improvement and a 31.6% CPU usage reduction compared to an IoT network using OpenVPN.
Seongkyeong Kwon, Seunghoon Woo, Gangmo Seong, Heejo Lee, "OCTOPOCS: Automatic Verification of Propagated Vulnerable Code Using Reformed Proofs of Concept", Annual IEEE/IFIP Int'l Conf. on Dependable Systems and Networks (DSN), pp. 174-185, Jun. 21. 2021.
Addressing vulnerability propagation has become a major issue in software ecosystems. Existing approaches hold the promise of detecting widespread vulnerabilities but cannot effectively verify whether propagated vulnerable code still poses threats. We present OCTOPOCS, which uses a reformed Proof-of-Concept (PoC) to verify whether a vulnerability is propagated. Using context-aware taint analysis, OCTOPOCS extracts crash primitives (the parts of the PoC used in the code area shared between the original vulnerable software and the propagated software) from the original PoC. OCTOPOCS then utilizes directed symbolic execution to generate guiding inputs that direct the execution of the propagated software from the entry point to the shared code area. Thereafter, OCTOPOCS creates a new PoC by combining crash primitives and guiding inputs. It finally verifies the propagated vulnerability using the created PoC. We evaluated OCTOPOCS with 15 real-world C and C++ vulnerable software pairs, with results showing that OCTOPOCS successfully verified 14 propagated vulnerabilities.
Seunghoon Woo, Sunghan Park, Seulbae Kim, Hakjoo Oh, Heejo Lee, "CENTRIS: A Precise and Scalable Approach for Identifying Modified Open-Source Software Reuse", IEEE/ACM Int'l Conf. on Software Engineering (ICSE), pp. 860-872, May. 22. 2021.
Open-source software (OSS) is widely reused as it provides convenience and efficiency in software development. Despite evident benefits, unmanaged OSS components can introduce threats, such as vulnerability propagation and license violation. Unfortunately, identifying reused OSS components is challenging because the reused OSS is predominantly modified and nested. In this paper, we propose CENTRIS, a precise and scalable approach for identifying modified OSS reuse. By segmenting an OSS code base and detecting the reuse of only the unique part of each OSS, CENTRIS can precisely identify modified OSS reuse in the presence of nested OSS components. For scalability, CENTRIS eliminates redundant code comparisons and accelerates the search using hash functions. When we applied CENTRIS to 10,241 widely employed GitHub projects, comprising 229,326 versions and 80 billion lines of code, we observed that modified OSS reuse is the norm in software development, occurring 20 times more frequently than exact reuse. Nonetheless, CENTRIS identified reused OSS components with 91% precision and 94% recall in less than a minute per application on average, whereas a recent clone detection technique, which does not take modified and nested OSS reuse into account, hardly reached 10% precision and 40% recall.
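The redundancy-elimination idea, hashing each function once and matching only the parts unique to each OSS, can be sketched as follows (the function bodies and library names are made up):

```python
# Rough sketch of unique-part matching: hash every function, discard
# functions shared across OSS projects, and match the target against
# each project's unique hashes only. Examples are hypothetical.
import hashlib

def h(code):
    return hashlib.sha256(code.encode()).hexdigest()

oss_funcs = {
    "libA": {h("int add(int a,int b){return a+b;}"), h("void a_only(){}")},
    "libB": {h("int add(int a,int b){return a+b;}"), h("void b_only(){}")},
}
shared = set.intersection(*oss_funcs.values())           # functions common to both
unique = {name: funcs - shared for name, funcs in oss_funcs.items()}

target = {h("void a_only(){}"), h("int main(){return 0;}")}
reused = [name for name, funcs in unique.items() if funcs & target]
# Only libA is reported: the shared add() function cannot distinguish
# the two libraries, so it is excluded from matching.
```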
Yoonjong Na, Yejin Joo, Heejo Lee, Xiangchen Zhao, Kurian Karyakulam Sajan, Gowri Ramachandran, Bhaskar Krishnamachari, "Enhancing the Reliability of IoT Data Marketplaces through Security Validation of IoT Devices", Int'l Conf. on Distributed Computing in Sensor Systems (DCOSS), May. 27. 2020.
IoT data marketplaces are being developed to help cities and communities create large-scale IoT applications. Such data marketplaces let IoT device owners sell their data to application developers. Following this application development model, application developers need not deploy their own IoT devices when developing IoT applications; instead, they can buy data from a data marketplace. In a marketplace-based IoT application, the application developers make critical business and operational decisions using the data produced by sellers' IoT devices. Under these circumstances, it is crucial to verify and validate the security of the IoT devices. In this paper, we assess the security of IoT data marketplaces. In particular, we discuss what kinds of vulnerabilities exist in IoT data marketplaces using the well-known STRIDE model, and we present a security assessment and certification framework for IoT data marketplaces to help device owners examine the security vulnerabilities of their devices. Most importantly, our solution certifies IoT devices when they connect to the data marketplace, which helps application developers make an informed decision when buying and consuming data from a data marketplace. To demonstrate the effectiveness of the proposed approach, we developed a proof of concept using I3 (Intelligent IoT Integrator), an open-source IoT data marketplace developed at the University of Southern California, and IoTcube, a vulnerability detection toolkit developed by researchers at Korea University. Through this work, we show that it is possible to increase the reliability of an IoT data marketplace without damaging the convenience of its users.
Sunbeom So, Myungho Lee, Jisu Park, Heejo Lee, Hakjoo Oh, "VeriSmart: A Highly Precise Safety Verifier for Ethereum Smart Contracts", IEEE Symposium on Security and Privacy, May. 18. 2020.
We present VERISMART, a highly precise verifier for ensuring the arithmetic safety of Ethereum smart contracts. Writing safe smart contracts without unintended behavior is critically important because smart contracts are immutable and even a single flaw can cause huge financial damage. In particular, ensuring that arithmetic operations are safe is one of the most important and common security concerns of Ethereum smart contracts nowadays. In response, several safety analyzers have been proposed over the past few years, but the state of the art is still unsatisfactory; no existing tools achieve high precision and recall at the same time, being inherently limited to producing annoying false alarms or missing critical bugs. By contrast, VERISMART aims to be an uncompromising analyzer that performs exhaustive verification without sacrificing precision or scalability, thereby greatly reducing the burden of manually checking undiscovered or incorrectly reported issues. To achieve this goal, we present a new domain-specific algorithm for verifying smart contracts, which is able to automatically discover and leverage transaction invariants that are essential for precisely analyzing smart contracts. Evaluation with real-world smart contracts shows that VERISMART can detect all arithmetic bugs with a negligible number of false alarms, far outperforming existing analyzers.
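The arithmetic-safety property being verified can be illustrated with a small check (a sketch of the property itself, not of VeriSmart's verification algorithm):

```python
# Illustration of the property VeriSmart proves: EVM uint256 addition
# wraps at 2**256, so `a + b` is safe only if the true sum fits. A
# transaction invariant (e.g., the sum of all balances stays constant)
# lets a verifier discharge such checks for every reachable state.
UINT256_MAX = 2**256 - 1

def safe_add(a, b):
    """Return a + b, or raise if uint256 addition would overflow."""
    if a > UINT256_MAX - b:
        raise OverflowError("uint256 addition overflows")
    return a + b
```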
Choongin Lee, Jeonghan Bae, Heejo Lee, "PRETT: Protocol Reverse Engineering Using Binary Tokens and Network Traces", IFIP Int'l Conf. on ICT Systems Security and Privacy Protection (IFIP SEC), pp. 141-155, Sep. 20. 2018.
Protocol reverse engineering is the process of extracting application-level protocol specifications. The specifications are a useful source of knowledge about network protocols and can be used for various purposes. Despite the successful results of prior works, their methods primarily result in the inference of a limited number of message types. We herein propose a novel approach that infers a minimized state machine while retaining a rich amount of information. The combined input of tokens extracted from the network protocol binary executables and network traces enables the inference of new message types and protocol behaviors that had not been found in previous works. In addition, we propose a state minimization algorithm that can be applied to real-time black-box inference. The experimental results show that our approach infers the largest number of message types for the File Transfer Protocol (FTP) and the Simple Mail Transfer Protocol (SMTP) compared to eight prior works. Moreover, we found unexpected behaviors in two protocol implementations using the inferred state machines.
Sangwoo Kim, Seokmyung Hong, Jaesang Oh, Heejo Lee, "Obfuscated VBA Macro Detection Using Machine Learning", IEEE/IFIP Int'l Conf. on Dependable Systems and Networks, pp. 490-501, Jun. 28. 2018.
Malware using document files as an attack vector has continued to increase and now constitutes a large portion of phishing attacks. To avoid anti-virus detection, malware writers usually implement obfuscation techniques in their source code. Although obfuscation is related to malicious code detection, little research has been conducted on obfuscation with regard to Visual Basic for Applications (VBA) macros. In this paper, we summarize the obfuscation techniques and propose an obfuscated macro code detection method using five machine learning classifiers. To train these classifiers, our proposed method uses 15 discriminant static features, taking into account the characteristics of VBA macros. We evaluated our approach using a real-world dataset of obfuscated and non-obfuscated VBA macros extracted from Microsoft Office document files. The experimental results demonstrate that our detection approach achieved an F2 score improvement of greater than 23% compared to those of related studies.
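One plausible static feature can be illustrated as follows (illustrative only; the paper's 15 features are not reproduced here): obfuscated identifiers tend to have higher character entropy than readable ones.

```python
# Illustrative static feature for obfuscation detection (not one of
# the paper's exact 15 features): Shannon entropy of identifier
# characters, which tends to rise under name obfuscation.
import math
from collections import Counter

def shannon_entropy(s):
    counts = Counter(s)
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

readable   = "OpenWorkbookAndCopyCells"
obfuscated = "xK9qZ2vB7mWc4RtY1nHs8dLf"
# Repeated letters in readable names lower their entropy, while the
# near-random obfuscated name approaches the maximum for its length.
```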
Youngbin Pyo, Choongin Lee, Heejo Lee, "Visual Analysis of Corrupted Video Data in Video Event Data Recorders", IEEE Conference on Dependable and Secure Computing, Aug. 10. 2017.
With the rapid proliferation of video event data recorders (VEDRs), video file data from VEDRs are often used as the primary evidence in many fields, such as law enforcement. In this paper, we propose a method for reconstructing corrupted video files and capturing key events recorded in the video file for use as valid evidence. The method first extracts image features from each video frame and constructs a multidimensional vector. Subsequently, dimension reduction of these vectors is performed for visualization in low-dimensional space. The proper sequence of the video frames is restored by using a curve fitting technique for the low-dimensional vectors. Then, we calculate the change in the slope of the curve-fitted model to detect key events in video files. The proposed method generates significant results not provided by existing file recovery techniques.
Seulbae Kim, Seunghoon Woo, Heejo Lee, Hakjoo Oh, "VUDDY: A Scalable Approach for Vulnerable Code Clone Discovery", IEEE Symposium on Security and Privacy, May. 22. 2017.
The ecosystem of open source software (OSS) has been growing considerably in size. In addition, code clones - code fragments that are copied and pasted within or between software systems - are also proliferating. Although code cloning may expedite the process of software development, it often critically affects the security of software because vulnerabilities and bugs can easily be propagated through code clones. These vulnerable code clones are increasing in conjunction with the growth of OSS, potentially contaminating many systems. Although researchers have attempted to detect code clones for decades, most of these attempts fail to scale to the size of the ever-growing OSS code base. The lack of scalability prevents software developers from readily managing code clones and associated vulnerabilities. Moreover, most existing clone detection techniques focus overly on merely detecting clones, which impairs their ability to accurately find "vulnerable" clones. In this paper, we propose VUDDY, an approach for the scalable detection of vulnerable code clones, which is capable of detecting security vulnerabilities in large software programs efficiently and accurately. Its extreme scalability is achieved by leveraging function-level granularity and a length-filtering technique that reduces the number of signature comparisons. This efficient design enables VUDDY to preprocess a billion lines of code in 14 hours and 17 minutes, after which it requires only a few seconds to identify code clones. In addition, we designed a security-aware abstraction technique that renders VUDDY resilient to common modifications in cloned code, while preserving the vulnerable conditions even after the abstraction is applied. This extends the scope of VUDDY to identifying variants of known vulnerabilities with high accuracy.
In this study, we describe its principles and evaluate its efficacy and effectiveness by comparing it with existing mechanisms and presenting the vulnerabilities it detected. VUDDY outperformed four state-of-the-art code clone detection techniques in terms of both scalability and accuracy, and proved its effectiveness by detecting zero-day vulnerabilities in widely used software systems, such as Apache HTTPD and Ubuntu OS Distribution.
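The fingerprinting pipeline (normalization, abstraction, hashing, and a length pre-filter) can be sketched as follows. This is a hedged sketch: VUDDY's actual abstraction rules are richer than the parameter renaming shown here, and the helper names are made up.

```python
# Hedged sketch of VUDDY-style function fingerprinting: normalize the
# body, apply a simple abstraction (here, only formal-parameter
# renaming), then hash; the stored length acts as a cheap pre-filter
# before any hash comparison.
import hashlib
import re

def fingerprint(body, params):
    code = re.sub(r"\s+", "", body.lower())           # normalization
    for p in params:
        code = re.sub(rf"\b{p}\b", "FPARAM", code)    # parameter abstraction
    return len(code), hashlib.md5(code.encode()).hexdigest()

vuln  = fingerprint("int f(char *buf) { strcpy(buf, s); }", ["buf"])
clone = fingerprint("int f(char *dst) { strcpy(dst, s); }", ["dst"])
other = fingerprint("int g(void) { return 0; }", [])
# The renamed clone collides with the vulnerable fingerprint, while the
# unrelated function is rejected by the length filter before hashing.
```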
Seulbae Kim, Seunghoon Woo, Heejo Lee, Hakjoo Oh, "Poster: IoTcube: An Automated Analysis Platform for Finding Security Vulnerabilities", IEEE Symposium on Security and Privacy, May. 22. 2017.
Although the number of services and devices related to the Internet of Things (IoT) is consistently increasing, not many people are aware that software vulnerabilities are also proliferating at an alarming rate along with the spread of IoT. In addition, for people without security backgrounds, defending their devices against these vulnerabilities is a huge challenge. IoTcube, an automated analysis platform for finding security vulnerabilities in IoT devices, was developed as a guidance system for anyone, with or without security expertise.
John Milburn, Heejo Lee, "FassKey: A secure and convenient authentication system", IEEE NetSoft, pp. 489-495, Jun. 30. 2016.
We present an identity authentication system that is cryptographically strong, pseudonymous, efficient enough to run well on current smartphone devices, and easily extensible for payment and banking functionality. We also describe high-security algorithms and programming methods for the implementation, including the server, network transmission, and application development. The system is unencumbered by intellectual property and provably secure. The end-user app implementation uses three-factor security: a combination of a unique device, a user password, and a fingerprint. We use well-known and proven cryptographic primitives.
Hongzhe Li, Jaesang Oh, Hakjoo Oh, Heejo Lee, "Automated Source Code Instrumentation for Verifying Potential Vulnerabilities", IFIP Int'l Conf. on ICT Systems Security and Privacy Protection (IFIP SEC), Vol. 471, pp. 211-226, May. 11. 2016.
With a rapid yearly growth rate, software vulnerabilities pose great threats to system safety. In theory, detecting and removing vulnerabilities before the code is ever deployed can greatly ensure the quality of released software. However, due to the enormous amount of code being developed, as well as the lack of human resources and expertise, severe vulnerabilities remain concealed or cannot be revealed effectively. Current source code auditing tools for vulnerability discovery either generate too many false positives or require overwhelming manual effort to report actual software flaws. In this paper, we propose an automatic verification mechanism to discover and verify vulnerabilities using program source instrumentation and concolic testing. First, we leverage CIL to statically analyze the source code, including extracting the program CFG, locating the security sinks, and backward-tracing the sensitive variables. Subsequently, we perform automated program instrumentation to insert security probes ready for vulnerability verification. Finally, the instrumented program source is passed to the concolic testing engine to verify and report the existence of an actual vulnerability. We demonstrate the efficacy and efficiency of our mechanism by implementing a prototype system and performing experiments with nearly 4,000 test cases from the Juliet Test Suite. The results show that our system can verify over 90% of the test cases, and it reports buffer overflow flaws with 100% precision (0 false positives) and 94.91% recall. To prove the practicability of our system on real-world programs, we also applied it to two popular Linux utilities, Bash and Cpio. As a result, our system finds and verifies vulnerabilities in a fully automatic way with no false positives.
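The instrumentation step can be illustrated with a toy text-based version (real CIL-based instrumentation operates on the AST, and the probe name below is hypothetical):

```python
# Toy illustration of probe insertion: place a security probe before
# each security sink (here, just `strcpy`) so a later concolic run can
# check the safety condition concretely. Real instrumentation works on
# the AST via CIL, not on raw text as done here.

def instrument(source, sink="strcpy(", probe="ASSERT_FITS(dst, src);"):
    out = []
    for line in source.splitlines():
        if sink in line:
            indent = line[: len(line) - len(line.lstrip())]
            out.append(indent + probe)  # probe checked by the concolic engine
        out.append(line)
    return "\n".join(out)

c_src = "void f() {\n    strcpy(dst, src);\n}"
inst = instrument(c_src)
```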
Hyuckmin Kwon, Seulbae Kim, Heejo Lee, "SIGMATA: Storage Integrity Guaranteeing Mechanism Against Tampering Attempts for Video Event Data Recorders", Int'l Multi-Conf. on Complexity, Informatics and Cybernetics (IMCIC), Mar. 8. 2016.
The usage and market size of video event data recorders (VEDRs), also known as car black boxes, are rapidly increasing. Since VEDRs can provide more visual information about car accident situations than any other device currently used for accident investigations (e.g., closed-circuit television), the integrity of VEDR contents is important to any meaningful investigation. Researchers have focused on file system integrity or photographic approaches to integrity verification. However, unlike other general data, the video data in VEDRs exhibit a unique I/O behavior in that the videos are stored chronologically. In addition, the owners of VEDRs can manipulate unfavorable scenes after accidents to conceal their recorded behavior. Since prior art does not consider the time relationship between frames and fails to discover frame-wise forgery, more detailed integrity assurance is required. In this paper, we focus on the development of a frame-wise forgery detection mechanism that resolves the limitations of previous mechanisms. We introduce SIGMATA, a novel storage integrity guaranteeing mechanism against tampering attempts for VEDRs. We describe its operation, demonstrate its effectiveness for detecting possible frame-wise forgery, and compare it with existing mechanisms. The results show that existing mechanisms fail to detect any frame-wise forgery, while our mechanism thoroughly detects every frame-wise forgery. We also evaluate its computational overhead using real VEDR videos. The results show that SIGMATA indeed discovers frame-wise forgery attacks effectively and efficiently, with an encoding overhead of less than 1.5 milliseconds per frame.
Choongin Lee, Jehyun Lee, Sangwook Lee, Heejo Lee, "Integrity Verification of Video Files in a Video Event Data Recorder", Int'l Conf. on Internet (ICONI), Dec. 14. 2015.
Most surveillance camera systems are not equipped with proper built-in integrity protection functions. Consequently, manually determining whether digital contents have been falsified is becoming extremely difficult for investigators. Hence, systematic approaches to forensic integrity verification are essential for ascertaining truth or falsehood. To this end, we propose an integrity verification method that utilizes the structure of video files in a Video Event Data Recorder (VEDR). The proposed method finds the difference in frame index fields between a forged file and an original file. Experiments conducted using commercial VEDRs and video files forged by a video editing tool demonstrate that the proposed integrity verification method is able to detect the broken integrity of video files.
Aapo Kalliola, Kiryong Lee, Heejo Lee, Tuomas Aura, "Flooding DDoS Mitigation and Traffic Management with Software Defined Networking", IEEE CloudNet, Oct. 8. 2015.
Mitigating distributed denial-of-service attacks can be a complex task due to the wide range of attack types, attacker adaptation, and defender constraints. We propose a defense mechanism that is largely automated and can be implemented on current software-defined networking (SDN)-enabled networks. Our mechanism combines normal traffic learning, external blacklist information, and elastic capacity invocation to provide effective load control, filtering, and service elasticity during an attack. We implement the mechanism and analyze its performance on a physical SDN testbed using a comprehensive set of real-life normal traffic traces and synthetic attack traces. The results indicate that the mechanism is effective in maintaining roughly 50% to 80% service levels even when hit by an overwhelming attack.
Hongzhe Li, Hyuckmin Kwon, Jonghoon Kwon, Heejo Lee, "A Scalable Approach for Vulnerability Discovery Based on Security Patches", Applications and Techniques in Information Security (ATIS), pp. 109-122, Nov. 28. 2014.
Software vulnerability has long been considered an important threat to system safety. A vulnerability is often reproduced owing to frequent code reuse by programmers. Security patches are often not propagated to all code clones; however, they can be leveraged to discover unknown vulnerabilities. Static auditing approaches are frequently proposed to scan code for security flaws; unfortunately, they often generate too many false positives. While dynamic execution analysis can precisely report vulnerabilities, it is ineffective in path exploration, which prevents it from scaling to large programs. In this paper, we propose a scalable approach to discover vulnerabilities in real-world programs based on released security patches. We use a fast, scalable, syntax-based method to find code clones, and we then verify the code clones using concolic testing to dramatically decrease false positives. In addition, we mitigate the path explosion problem by backward data tracing in concolic execution. We conducted experiments with real-world open-source projects (Linux Ubuntu OS distributions and program packages) and reported 7 real vulnerabilities out of 63 code clones found in Ubuntu 14.04 LTS. Going one step further, we confirmed more code clone vulnerabilities in various versions of programs, including Apache and Rsyslog. We also tested the effectiveness of vulnerability verification with test cases from the Juliet Test Suite; our verification method achieved 98% accuracy with 0 false positives.
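As a rough illustration of the syntax-based clone-finding step, the sketch below normalizes C snippets (dropping comments and whitespace and abstracting identifiers) and hashes the result, so a patched fragment and its renamed clone collide. The snippets, keyword set, and hash choice are all illustrative assumptions, not the paper's actual implementation.

```python
import hashlib
import re

# Minimal keyword set kept during identifier abstraction (illustrative).
KEYWORDS = {"if", "else", "for", "while", "return", "int", "char", "sizeof"}

def normalize(code: str) -> str:
    """Strip C comments/whitespace and abstract identifiers to 'ID',
    a common normalization step in syntax-based clone detection."""
    code = re.sub(r"/\*.*?\*/", "", code, flags=re.S)  # block comments
    code = re.sub(r"//[^\n]*", "", code)               # line comments
    code = re.sub(r"\b[A-Za-z_]\w*\b",
                  lambda m: m.group(0) if m.group(0) in KEYWORDS else "ID",
                  code)
    return re.sub(r"\s+", "", code)

def fingerprint(code: str) -> str:
    return hashlib.sha1(normalize(code).encode()).hexdigest()

patched = "int n = len; if (n > 64) return -1; memcpy(dst, src, n);"
clone   = "int sz = length; if (sz > 64) return -1; memcpy(out, in_, sz);"
print(fingerprint(patched) == fingerprint(clone))  # identical after normalization
```

In the paper's pipeline, clones found this way are only candidates; concolic testing then verifies whether each candidate is actually vulnerable.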
Jonghoon Kwon, Jihwan Jeong, Jehyun Lee, Heejo Lee, "DroidGraph: Discovering Android Malware by Analyzing Semantic Behavior", IEEE Conf. on Communication and Network Security, pp. 345-346, Oct. 30. 2014.
Mobile malware has recently been recognized as a significant problem in accordance with the rapid growth of the smartphone market. Despite numerous efforts to thwart its growth, the amount of mobile malware keeps increasing as the malware evolves. By applying, for example, code obfuscation or junk code insertion, mobile malware can manipulate its appearance while maintaining the same functionality, and thus easily evade existing anti-mobile-malware solutions. In this paper, we focus on Android malware and propose a new method called DroidGraph to discover evolved Android malware. DroidGraph leverages the semantics of Android malware. More precisely, we transform an Android malware APK file into hierarchical behavior graphs, represented with 136 identical nodes based on the semantics of Android API calls. Then, we select unique behavior graphs as semantic signatures describing common behaviors of Android malware. In evaluation, DroidGraph shows approximately 87% detection accuracy with only 40 semantic signatures against 260 real-world Android malware samples, and no false positives on 3,623 benign applications.
Jihwan Jeong, Dongwon Seo, Chanyoung Lee, Jonghoon Kwon, Heejo Lee, John Milburn, "MysteryChecker: Unpredictable Attestation to Detect Repackaged Malicious Applications in Android", IEEE MALWARE, pp. 50-57, Oct. 29. 2014.
The number of malicious applications, sometimes known as malapps, on Android smartphones has increased significantly in recent years. Malapp writers abuse repackaging techniques to rebuild applications with code changes. Existing anti-malware applications do not successfully defeat or defend against repackaged malapps because of their numerous variants. Software-based attestation approaches, widely used in resource-constrained environments, have been developed to detect code changes in software with low resource consumption. In this paper, we propose a novel software-based attestation approach, called MysteryChecker, that leverages an unpredictable attestation algorithm. For unpredictable attestation, MysteryChecker applies the concept of code obfuscation, which changes syntax in order to hinder code analysis by adversaries. More precisely, unpredictable attestation is achieved by chaining randomly selected crypto functions. A verifier sends a randomly generated attestation module, and the target application must reply with a correct response using the attestation module. The target application also periodically receives a new module that contains a different attestation algorithm. Thus, even if an attacker analyzes the attestation module, the target application replaces the existing module with a new one, and the attacker's analysis becomes invalid. Experimental results show that MysteryChecker completely detects known and unknown variants of repackaged malapps, while existing anti-malware applications only partially detect the variants.
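The chaining of randomly selected crypto functions can be sketched as below, with a toy verifier/prover pair; the primitive list, chain length, and attested data are illustrative assumptions, not MysteryChecker's actual module format.

```python
import hashlib
import random

# Hash primitives the verifier may chain (illustrative subset).
PRIMITIVES = ["sha256", "sha1", "md5", "sha512"]

def build_module(rng: random.Random, length: int = 4) -> list:
    """Verifier side: assemble a fresh, unpredictable chain per round."""
    return [rng.choice(PRIMITIVES) for _ in range(length)]

def run_module(chain: list, data: bytes) -> str:
    """Prover side: feed the code region through the chained functions."""
    digest = data
    for name in chain:
        digest = hashlib.new(name, digest).digest()
    return digest.hex()

rng = random.Random(3)
chain = build_module(rng)                # randomly generated attestation module
app_code = b"APK code section bytes"     # stand-in for the attested region
expected = run_module(chain, app_code)   # verifier's precomputed answer
response = run_module(chain, app_code)   # honest application's reply
print(response == expected)
```

A repackaged app whose code region differs, or an attacker replaying an analysis of an old module, fails the comparison because each round uses a newly generated chain.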
Jonghoon Kwon, Jeongsik Kim, Jehyun Lee, Heejo Lee, Adrian Perrig, "PsyBoG: Power Spectral Density Analysis for Detecting Botnet Groups", IEEE MALWARE, pp. 85-92, Oct. 29. 2014.
Botnets are widely used for acquiring economic profits by launching attacks such as distributed denial-of-service (DDoS), identity theft, adware installation, mass spamming, and click fraud. Many approaches have been proposed to detect botnets, relying on end-host installations or operating on network traffic with deep packet inspection. They have limitations in detecting botnets that use evasion techniques such as packet encryption, fast flux, dynamic DNS, and DGAs. Sporadic botnet behavior, caused by systems being powered off or by a botnet's own nature, also brings non-negligible false detections. Furthermore, normal users' traffic causes many false alarms. In this paper, we propose a novel approach called PsyBoG that detects botnets by capturing their periodic activities. PsyBoG leverages a signal processing technique, power spectral density (PSD) analysis, to discover the major frequencies in the periodic DNS queries of botnets. The PSD analysis allows us to detect sophisticated botnets irrespective of their evasion techniques, sporadic behavior, and even the noise traffic generated by normal users. To evaluate PsyBoG, we utilize real-world DNS traces collected from a /16 campus network, comprising more than 48,046K queries, 34K distinct IP addresses, and 146K domains. PsyBoG caught 19 unknown and 6 known botnet groups with 0.1% false positives.
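A minimal sketch of the PSD idea on synthetic data: a bot's strictly periodic DNS queries show up as a sharp spectral peak above the flat noise floor of sporadic user queries. The time series, period, and noise rate are illustrative assumptions, not data from the paper.

```python
import numpy as np

# Toy DNS query time series: one bot querying every 8 time bins,
# buried in sporadic normal-user queries.
rng = np.random.default_rng(0)
t = np.arange(1024)
queries = (t % 8 == 0).astype(float) + (rng.random(1024) < 0.3)

# Power spectral density via the periodogram: |FFT|^2 / N.
centered = queries - queries.mean()           # drop the DC component
psd = np.abs(np.fft.rfft(centered)) ** 2 / len(centered)
freqs = np.fft.rfftfreq(len(centered), d=1.0)

# The bot's period appears as a peak at 1/8 cycles per bin, far above
# the noise floor contributed by normal users.
bot_bin = np.argmin(np.abs(freqs - 1 / 8))
ratio = psd[bot_bin] / np.median(psd)
print(f"peak-to-floor ratio at f=1/8: {ratio:.1f}")
```

This is why the approach tolerates encryption and DGAs: only query timing, not payload or domain content, is analyzed.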
Ibnu Mubarok, Kiryong Lee, Sihyung Lee, Heejo Lee, "Lightweight Resource Management for DDoS Traffic Isolation in a Cloud Environment", IFIP Int'l Conf. on ICT Systems Security and Privacy Protection (IFIP SEC), pp. 44-51, Jun. 2. 2014.
Distributed denial-of-service (DDoS) attacks are one of the most difficult issues in network security and communications. This paper is part of a research project that applies distributed defense against distributed attacks, aiming to provide services by distributing load from one main server to an infrastructure of cloud-based replicas. This paper proposes lightweight resource management for DDoS traffic isolation in cloud environments. Experimental results show that our mechanism is a viable approach for dynamic resource scaling under high traffic with distributed resource locations.
Hyok An, Heejo Lee, Adrian Perrig, "UAS: Universal Anti-Spoofing by Incorporating Existing Mechanisms", IEEE Conf. on Local Computer Networks (LCN), pp. 448-451, Oct. 22. 2013.
IP spoofing is attractive for amplifying network attacks and providing anonymity. Many approaches have been proposed to prevent IP spoofing attacks; however, they do not address a significant deployment issue: filtering inefficiency caused by a lack of incentives for early adopters. In practice, no mechanism has been widely deployed and none successfully blocks IP spoofing attacks. We propose a universal anti-spoofing (UAS) mechanism that incorporates existing mechanisms to thwart IP spoofing attacks. In the proposed mechanism, intermediate routers utilize any existing anti-spoofing mechanism to ascertain whether a packet is spoofed, and inscribe this information in the packet header. The edge routers of a victim network can then estimate the forgery of a packet based on the information sent by the upstream routers. The results of experiments conducted with Internet topologies indicate that UAS reduces false alarms by up to 84.5% compared to cases where each mechanism operates separately. Our evaluation shows that incorporating multiple anti-spoofing mechanisms reduces false alarms significantly.
Munkhbayar Bat-Erdene, Taebeom Kim, Hongzhe Li, Heejo Lee, "Dynamic Classification of Packing Algorithms for Inspecting Executables using Entropy Analysis", IEEE MALWARE, pp. 19-26, Oct. 22. 2013.
Packing is widely used for bypassing anti-malware systems, and the proportion of packed malware has been growing rapidly, now making up over 80% of malware. Few studies on detecting packing algorithms have been conducted during the last two decades. In this paper, we propose a method to classify the packing algorithms of given packed executables. First, we convert the entropy values of the packed executables loaded in memory into symbolic representations. Our method uses SAX (Symbolic Aggregate approXimation), which is known to be good at large-scale data conversion. Owing to its ability to simplify complicated patterns, symbolic representation is commonly used in bioinformatics and data mining. Second, we classify the distribution of symbols using supervised learning classifiers, i.e., Naive Bayes and Support Vector Machines. Experiments with a collection of 466 programs and 15 packing algorithms demonstrated that our method can identify the packing algorithms of given executables with a high accuracy of 94.2%, recall of 94.7%, and precision of 92.7%. This confirms that packing algorithms can be identified using entropy analysis, a measure of the uncertainty of running executables, without prior knowledge of the executable.
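The entropy-to-symbols step can be sketched as follows. Note the discretization below uses equal-width breakpoints for brevity; real SAX uses Gaussian breakpoints and piecewise aggregate approximation, and the entropy trace shown is invented for illustration.

```python
import math
from collections import Counter

def shannon_entropy(block: bytes) -> float:
    """Byte-level Shannon entropy in bits (0..8)."""
    counts = Counter(block)
    n = len(block)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def sax(series, alphabet="abcd"):
    """SAX-style discretization: z-normalize, then map each value to a
    symbol (simplified: equal-width instead of Gaussian breakpoints)."""
    mean = sum(series) / len(series)
    sd = (sum((x - mean) ** 2 for x in series) / len(series)) ** 0.5 or 1.0
    z = [(x - mean) / sd for x in series]
    lo, width = min(z), (max(z) - min(z)) / len(alphabet) or 1.0
    return "".join(alphabet[min(int((v - lo) / width), len(alphabet) - 1)]
                   for v in z)

# Entropy trace of an unpacking run: high while the payload is still
# compressed, falling as original code is unpacked (illustrative values).
trace = [7.9, 7.8, 7.7, 6.0, 4.2, 3.1, 3.0, 2.9]
print(sax(trace))  # a monotone run of symbols, high to low
```

The resulting symbol strings, one per execution, are what the supervised classifiers (Naive Bayes, SVM) are trained on.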
Hongzhe Li, Taebeom Kim, Munkhbayar Bat-Erdene, Heejo Lee, "Software Vulnerability Detection using Backward Trace Analysis and Symbolic Execution", Int'l Workshop on Secure Software Engineering (SecSE), pp. 446-454, Sep. 3. 2013.
Software vulnerabilities have long been considered an important threat to the safety of software systems. When source code is accessible, its information can greatly help in detecting vulnerabilities. Static analysis is frequently used to scan available source code for errors that cause security problems; however, it often generates many false positives. Symbolic execution has also been proposed to detect vulnerabilities and has shown good performance in some studies, but it is either ineffective in path exploration or does not scale well to large programs. In practice, since most paths are not related to security problems, and software vulnerabilities are usually caused by the improper use of security-sensitive functions, the number of paths can be reduced by tracing sensitive data backward from security-sensitive functions, so as to consider only paths related to vulnerabilities. Moreover, to avoid having to generate bug-triggering test inputs, formal reasoning can be used by solving the relevant program conditions. In this research, we propose backward trace analysis and symbolic execution to detect vulnerabilities in source code. We first find all the hot spots in a source file. For each hot spot, we construct a data flow tree to obtain the possible execution traces. Along each execution trace, we then perform symbolic execution to generate the program constraint (PC) and derive the security constraint (SC) from our predefined security requirements. A program constraint is a constraint imposed by program logic on program variables; a security constraint is a constraint on program variables that must be satisfied to ensure system security. Finally, a hot spot is reported as a vulnerability if there is an assignment of values to program inputs that satisfies PC but violates SC; in other words, that satisfies PC ∧ ¬SC.
We have implemented our approach and conducted experiments on test cases randomly chosen from the Juliet Test Suite provided by the US National Security Agency (NSA). The results show that our approach achieves a precision of 83.33%, recall of 90.90%, and F1 value of 86.95%, the best performance among competing tools. Moreover, our approach efficiently mitigates the path explosion problem of traditional symbolic execution.
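The PC ∧ ¬SC decision above can be sketched on a toy case, with a brute-force search standing in for the constraint solver; the hot spot and constraints are hypothetical.

```python
# Hypothetical hot spot:
#   char buf[16]; if (n <= 32) copy_n(buf, src, n);
# Program constraint  PC: n <= 32   (the path reaching the copy)
# Security constraint SC: n < 16    (the copy must fit the buffer)
# The hot spot is vulnerable iff PC ∧ ¬SC is satisfiable.

PC = lambda n: n <= 32
SC = lambda n: n < 16

# Brute force over a small input domain stands in for the solver.
witness = next((n for n in range(256) if PC(n) and not SC(n)), None)
print("vulnerable with witness n =", witness)  # n = 16 reaches the copy and overflows
```

The real system solves symbolic constraints rather than enumerating inputs, but the decision it makes per hot spot is exactly this satisfiability question.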
Rashad Aliyev, Dongwon Seo, Heejo Lee, "DROP-FAST: Defending against DDoS Attacks using Cloud Technology", Int'l Conf. on Security and Management (SAM), Jul. 24. 2013.
DDoS attacks continue to be a major threat to network security, and several new types of attacks, such as Layer-7 attacks (e.g., HTTP flood, Slowloris, and RUDY), have emerged. We propose a novel DDoS defense mechanism called DROP-FAST. Our mechanism provides distributed DDoS defense utilizing multiple replicas of the protected server throughout a cloud infrastructure. DROP-FAST is dynamic and adapts by controlling the number of replicas in the cloud based on attack strength. The main server is isolated from the network by the replica servers. Service quality metrics such as response time, incoming traffic load, and load sharing are improved by distributing the attack and replicating the main server throughout the cloud. We describe our mechanism in detail and discuss improvements over previously existing related work. We set up an experiment that shows a significant improvement in the traffic load on the main server as a result of utilizing the DROP-FAST mechanism.
Heesung Do, Peter Choi, Heejo Lee, "Dynamic Surveillance: A Case Study with Enron Email Data Set", Int'l Workshop on Information Security Applications (WISA), pp. 81-99, Jul. 19. 2013.
Surveillance is a critical measure to break anonymity. While surveillance with unlimited resources is often assumed as the adversary model against which stronger anonymity algorithms are designed, this paper addresses the general impact of limited resources on surveillance efficiency. The impact of limited resources on identifying a hidden group is studied experimentally; identification is done only by following communications between suspects, i.e., the information of who is talking to whom. The surveillance uses simple but intuitive algorithms to return more intelligence with limited resources. The surveillance subject used in this work is the publicly available Enron email data set, an actual trace of human interaction. The initial expectation was that, even with limited resources, intuitive surveillance algorithms would return higher intelligence than a random approach by exploiting the general properties of a power-law-style communication map. To the contrary, the impact of limited resources was found to be so large that intuitive algorithms do not return significantly higher intelligence than a random approach.
Suyeon Lee, Jehyun Lee, Heejo Lee, "Screening Smartphone Applications using Behavioral Signatures", IFIP Int'l Conf. on ICT Systems Security and Privacy Protection (IFIP SEC), Vol. 405, pp. 14-27, Jul. 8. 2013.
The sharp increase in smartphone malware has become one of the most serious security problems, and the most significant part of the growth comes from variants of existing malware. The legacy approach to malware, signature matching, is efficient in time but not robust against variants; the counter approach, behavior analysis, handles the variant issue but takes too much time and too many resources. We propose a variant detection mechanism using a runtime semantic signature. Our key idea is to reduce the control- and data-flow analysis overhead by using the binary patterns of the control and data flow of critical actions as a signature. Flow information is a significant part of behavior analysis but incurs high analysis overhead. In contrast to previous behavioral signatures, the runtime semantic signature achieves higher family classification accuracy without the flow analysis overhead, because the binary patterns of flow parts are hardly shared outside the family. Using the proposed signature, we detect new variants of known malware by static matching, efficiently and accurately. We evaluated our mechanism with 1,759 randomly collected real-world Android applications, including 79 variants of 4 malware families. Our mechanism showed 99.89% accuracy on variant detection, with a time complexity linear in the number of target applications. It is fully practical and outperforms previous works in both accuracy and efficiency.
Jungin Kang, Sangwook Lee, Heejo Lee, "A Digital Forensic Framework for Automated User Activity Reconstruction", Int'l Conf. on Information Security Practice and Experience (ISPEC), pp. 263-277, May. 13. 2013.
User activity reconstruction is a technique used in digital forensic investigation. Using this technique, digital forensic investigators extract a list of user activities from digital artifacts confiscated at the crime scene. Based on the list, explicit knowledge about the crime, such as motive, method, time, and place, can be deduced. Until now, activity reconstruction has been conducted by manual analysis. This means that the domain of the reconstructed activities is limited to the personal knowledge of the investigators, so the result exhibits low accuracy due to human errors, and the process requires an excessive amount of time. To solve these problems, this paper proposes a digital forensic framework, SigDiff, for automated user activity reconstruction. This framework uses a signature-based approach. It comprises an activity signature generation module, a signature database, a digital artifact collection module, and an activity reconstruction module. Using SigDiff, the process of user activity reconstruction can be performed accurately, with a high retrieval rate and in a reduced time span.
Haemin Park, Dongwon Seo, Heejo Lee, Adrian Perrig, "SMATT: Smart Meter ATTestation Using Multiple Target Selection and Copy-Proof Memory", Int'l Conf. on Computer Science and its Applications (CSA), Vol. 203, pp. 875-887, Nov. 22. 2012.
A smart grid is emerging as a promising technology to reform the world's electrical grids. Currently, attackers compromise security and privacy by maliciously modifying the memory of smart grid devices. Software-based attestation protocols establish the absence of malicious changes: a verifier and a target device locally generate their own checksums by memory traversal, and the verifier attests the target device by comparing the checksums. In smart grids, however, two challenges arise in deploying attestation protocols in practice: verification overhead in large-scale networks, and evasion of attestation by memory replication. In this paper, we propose a novel software-based attestation technique, termed SMATT (Smart Meter ATTestation), to address these two challenges by leveraging multiple target selection and copy-proof memory. A verifier randomly selects multiple smart meters and receives checksums from them. The verifier only compares the received checksums instead of performing memory traversal, remarkably reducing checksum computation overhead. To prevent memory replication, we design a customized copy-proof memory mechanism: the smart meter outputs garbage values when copy-proof memory sections are accessed, so attackers cannot replicate the memory. Furthermore, we define an SI epidemic model considering two attestation parameters, the number of infectious smart meters and the number of smart meters selected by the verifier, to enhance the malware detection accuracy of SMATT. In our experimental environments, SMATT takes only 20 seconds for a verifier to attest millions of smart meters. In addition, SMATT detects malware with over 90% probability if the malware tampers with 5% of memory.
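The multiple-target-selection step can be sketched as below: since all selected meters run identical firmware, the verifier can cross-compare their checksum replies instead of recomputing expected values. Meter names, nonce, and checksum function are illustrative assumptions, and the copy-proof memory mechanism is not modeled here.

```python
import hashlib
from collections import Counter

FIRMWARE = b"\x90" * 4096  # identical firmware image on every meter (toy stand-in)

def checksum(mem: bytes, nonce: bytes) -> str:
    """A meter's reply: a nonce-keyed digest over its memory region."""
    return hashlib.sha256(nonce + mem).hexdigest()

nonce = b"challenge-42"
replies = {
    "meter-01": checksum(FIRMWARE, nonce),
    "meter-07": checksum(FIRMWARE, nonce),
    "meter-19": checksum(FIRMWARE[:-1] + b"\x00", nonce),  # one tampered byte
    "meter-23": checksum(FIRMWARE, nonce),
}

# Majority vote over the replies: outliers are flagged as tampered,
# with no memory traversal on the verifier side.
majority, _ = Counter(replies.values()).most_common(1)[0]
suspicious = [m for m, c in replies.items() if c != majority]
print(suspicious)  # ['meter-19']
```

The SI epidemic model in the paper then informs how many meters must be sampled so that a majority of honest replies is likely even as an infection spreads.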
Jonghoon Kwon, Heejo Lee, "BinGraph: Discovering Mutant Malware using Hierarchical Semantic Signatures", IEEE MALWARE, pp. 104-111, Oct. 17. 2012.
The malware landscape has escalated dramatically over the last decade. The main reason for the increase is that new malware variants can be produced easily using simple code obfuscation techniques. Once obfuscation is applied, malware can change its syntax while preserving its semantics, and bypass anti-virus (AV) scanners. Malware authors thus commonly use code obfuscation techniques to generate metamorphic malware. Nevertheless, signature-based AV techniques are limited in detecting metamorphic malware, since they commonly rely on syntactic signature matching. In this paper, we propose BinGraph, a new mechanism that accurately discovers metamorphic malware. BinGraph leverages the semantics of malware, since mutant malware can manipulate only its syntax. To this end, we first extract API calls from malware and convert them into a hierarchical behavior graph represented with 128 identical nodes based on the semantics. Later, we extract unique subgraphs from the hierarchical behavior graphs as semantic signatures representing common behaviors of a specific malware family. To evaluate BinGraph, we analyzed a total of 827 malware samples from 10 malware families, along with 1,202 benign binaries. Among the malware, 20% of the samples, randomly chosen from each family, were used for extracting semantic signatures, and the rest were used for assessing detection accuracy. Only 32 subgraphs were selected as semantic signatures, with which BinGraph discovered malware variants with 98% detection accuracy.
Hyundo Park, Sung-Oh David Jung, Heejo Lee, Hoh Peter In, "Cyber Weather Forecasting: Forecasting Unknown Internet Worms Using Randomness Analysis", IEEE Int'l Conf. on Communications (IEEE ICC), pp. 376-387, Jun. 4. 2012.
Since early responses are crucial to reducing the damage from unknown Internet attacks, a defense mechanism may prioritize time efficiency and observing (and predicting) changes in network status, even at the sacrifice of accuracy. The security field strongly needs a new mechanism to predict unknown future Internet attacks. This motivates us to study forecasting of future Internet attacks, which we refer to as CWF (Cyber Weather Forecasting). In this paper, to show that the principle of CWF can be realized in the real world, we propose a forecasting mechanism called FORE (FOrecasting using REgression analysis), based on real-time analysis of the randomness in network traffic. FORE responds to unknown worms 1.8 times faster than the early detection mechanism ADUR (Anomaly Detection Using Randomness check), which can detect a worm when only one percent of the total number of vulnerable hosts is infected. Furthermore, FORE gives timely information about how the current network situation is changing. Evaluation results demonstrate the prediction efficiency of the proposed mechanism, including the ability to predict worm behavior starting from 0.03 percent infection. To the best of our knowledge, this is the first study to achieve prediction of future Internet attacks.
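The forecasting step can be sketched as fitting a regression over recent randomness scores and extrapolating a few steps ahead; a sustained upward forecast triggers an early warning before the score itself crosses a detection threshold. The quadratic model, window size, and score values below are illustrative assumptions, not FORE's actual regression.

```python
import numpy as np

# Randomness scores of network traffic over time: a flat baseline,
# then the super-linear ramp typical of early worm propagation.
t = np.arange(20, dtype=float)
score = np.concatenate([np.full(12, 0.05),
                        0.05 + 0.02 * (np.arange(8) + 1) ** 2])

# Least-squares fit over a sliding window, then extrapolate 3 steps.
window_t, window_y = t[-8:], score[-8:]
coeffs = np.polyfit(window_t, window_y, deg=2)
forecast = np.polyval(coeffs, t[-1] + 3)

print(f"now={score[-1]:.2f} forecast(+3)={forecast:.2f}")
```

The point of forecasting rather than detecting is visible here: the extrapolated score is well above the current one, so an alarm can fire while the infection level is still small.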
Hyundo Park, Indra Widjaja, Heejo Lee, "Detection of Cache Pollution Attacks Using Randomness Checks", IFIP Int'l Conf. on ICT Systems Security and Privacy Protection (IFIP SEC), pp. 1096-1100, Jun. 1. 2012.
The Internet plays an increasing role in content dissemination as user-generated content has exploded recently. Cache servers have been deployed to bypass bottlenecks in the network so that content can be delivered to end users more efficiently. With caches becoming more embedded in networks, emerging threats follow naturally. A cache pollution attack is one of the most serious threats to caching networks, including the current Internet and emerging caching networks such as Content Centric Networking (CCN). In this paper, we propose a detection approach against cache pollution attacks using randomness checks of a matrix. We apply an effective filtering approach and statistical sequential analysis for detecting low-rate attacks. The results of our experiments show that our approach can detect a cache pollution attack with an attack rate of only a few percent of the overall rate.
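One concrete instance of a matrix randomness check is rank over GF(2): diverse benign requests mapped into a 0/1 matrix yield near-full rank, while an attacker repeatedly requesting the same unpopular content collapses the rank. The mapping of requests to matrix rows here is an illustrative assumption, not the paper's exact construction.

```python
import numpy as np

def gf2_rank(m: np.ndarray) -> int:
    """Rank of a 0/1 matrix over GF(2) via Gaussian elimination."""
    m = m.copy() % 2
    rank = 0
    rows, cols = m.shape
    for col in range(cols):
        pivot = next((r for r in range(rank, rows) if m[r, col]), None)
        if pivot is None:
            continue
        m[[rank, pivot]] = m[[pivot, rank]]      # move pivot row up
        for r in range(rows):
            if r != rank and m[r, col]:
                m[r] ^= m[rank]                  # eliminate the column
        rank += 1
    return rank

rng = np.random.default_rng(1)
benign = rng.integers(0, 2, (32, 32))                    # diverse requests
attack = np.tile(rng.integers(0, 2, (1, 32)), (32, 1))   # one request, repeated

print(gf2_rank(benign), gf2_rank(attack))  # near-full rank vs. rank 1
```

A low rank relative to the matrix size, sustained across the sequential analysis, is the signal that the request stream is no longer random.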
Taebeom Kim, Haemin Park, Hyunchul Jung, Heejo Lee, "Online Detection of Fake Access Points using Received Signal Strengths", IEEE Vehicular Technology Conf. (IEEE VTC), May. 6. 2012.
Wireless access points (APs) are widely used for the convenience and productivity of smartphone users. The growing popularity of wireless local area networks (WLANs) increases the risk of wireless security attacks. A fake AP can be set up in any public space to impersonate legitimate APs for monetization. Existing fake AP detection methods analyze wireless traffic using extra devices, with the traffic collected by servers. However, these server-side methods are costly and can secure communication only in a limited set of places. Recently, several fake AP detection methods have been designed to overcome the server-side problems on the client side. However, there are two limitations to the client-side methods: cumbersome processes and limited resources. When these methods collect data, calculating interval times incurs time-consuming processes for detecting fake characteristics on the client side. Moreover, the operating systems of smartphones provide limited resources that can hardly be devoted to detection. In this paper, we propose a novel fake AP detection method that solves the aforementioned client-side problems. The method leverages received signal strengths (RSSs) and an online detection algorithm. Our method collects RSSs from nearby APs and normalizes them for accurate measurement, then measures the similarity of the normalized RSSs. If the similarity between normalized RSSs is less than a fixed threshold value, we determine that the RSSs are generated by a fake device. The optimal threshold value can be derived from sequential hypothesis testing. In our experiments, when the fixed threshold value was 2, the true positive rate was over 99% and the false positive rate was less than 0.1% within three observations.
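The normalize-then-compare step can be sketched as below, using Pearson correlation as a stand-in similarity measure (the paper's own metric and its threshold of 2 are on a different scale); the dBm traces and the 0.8 cutoff are illustrative assumptions.

```python
def pearson(a, b):
    """Correlation of two RSS traces; normalization (mean/sd) is built in."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n
    sa = (sum((x - ma) ** 2 for x in a) / n) ** 0.5
    sb = (sum((y - mb) ** 2 for y in b) / n) ** 0.5
    return cov / (sa * sb)

THRESHOLD = 0.8  # illustrative cutoff

ref  = [-48, -50, -47, -49, -51, -48, -50, -49]  # known-good AP trace (dBm)
same = [-47, -50, -46, -49, -50, -47, -49, -48]  # same AP, later observation
fake = [-60, -40, -61, -41, -62, -40, -60, -42]  # different transmitter

for name, trace in [("same", same), ("fake", fake)]:
    verdict = "legitimate" if pearson(ref, trace) >= THRESHOLD else "fake"
    print(name, "->", verdict)
```

The intuition matches the abstract: a transmitter at a different physical location produces an RSS pattern that normalization cannot reconcile with the legitimate AP's pattern.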
Yu Seung Kim, Patrick Tague, Heejo Lee, Hyogon Kim, "Carving Secure Wi-Fi Zones with Defensive Jamming", ACM Asia Conf. on Computer and Communications Security (ASIACCS), pp. 53-54, May. 2. 2012.
With rampant deployment of wireless technologies such as WLAN, information leakage is increasingly becoming a threat for serious adopters such as enterprises. Research on antidotes has mainly focused on logical measures such as authentication protocols and secure channels, but an inside collaborator can readily circumvent such defenses and wirelessly divert classified information to a conniver outside. In this paper, we propose a novel approach to the problem that forges a walled wireless coverage, a secure Wi-Fi zone in particular. Inspired by the fact that jamming as an attack is inherently difficult to defeat, we turn the tables and use it as a defensive weapon to fend off covert illegal access from outside. To validate the proposed approach, we conduct extensive outdoor experiments with IEEE 802.11g Wi-Fi adapters. The measurements show that the forged secure zones match well with the model prediction and that the defensive jamming approach can indeed be used to protect wireless networks against information leakage. Lastly, we propose algorithms to configure defensive jammers in arbitrary geometry.
Dongwon Seo, Heejo Lee, Adrian Perrig, "PFS: Probabilistic Filter Scheduling Against Distributed Denial-of-Service Attacks", IEEE Conf. on Local Computer Networks (LCN), pp. 9-17, Oct. 4. 2011.
Distributed denial-of-service (DDoS) attacks continue to pose an important challenge to current networks. DDoS attacks can cause victim resource consumption and link congestion. A filter-based DDoS defense is considered an effective approach, since it can defend against both attacks: victim resource consumption and link congestion. However, existing filter-based approaches do not address the properties necessary for viable DDoS solutions: how to practically identify attack paths, how to propagate filters to the best locations (filter routers), and how to manage many filters to maximize the defense effectiveness. We propose a novel mechanism, termed PFS (Probabilistic Filter Scheduling), to efficiently defeat DDoS attacks while satisfying these properties. In PFS, filter routers identify attack paths using probabilistic packet marking, and maintain filters using a scheduling policy to maximize the defense effectiveness. Our experiments show that PFS achieves 44% higher effectiveness than other filter-based approaches. Furthermore, we vary the PFS parameters in terms of marking probability and deployment ratio, and find that a 30% marking probability and a 30% deployment ratio maximize the attack blocking rate of PFS.
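The probabilistic packet marking that PFS builds on can be sketched as below; the packet field names and the node-sampling simplification are assumptions for illustration, not PFS's actual marking format.

```python
import random

def forward(packet, path, p=0.3):
    """Simulate probabilistic packet marking along a path of routers.
    Each filter router overwrites the mark field with probability p and
    resets the hop distance; otherwise it increments the distance. Marks
    from routers closer to the victim therefore dominate, letting the
    victim reconstruct the attack path from many packets.
    (A simplified node-sampling sketch; field layout is hypothetical.)"""
    for router_id in path:
        if random.random() < p:
            packet["mark"] = router_id
            packet["distance"] = 0
        elif "distance" in packet:
            packet["distance"] += 1
    return packet
```

The abstract's finding that a 30% marking probability works best is reflected in the default `p=0.3` above.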
Hyunsang Choi, Bin B. Zhu, Heejo Lee, "Detecting Malicious Web Links and Identifying Their Attack Types", USENIX Int'l Conf. on Web Application Development (WebApps), pp. 11-11, Jun. 15. 2011.
Malicious URLs have been widely used to mount various cyber attacks including spamming, phishing and malware. Detection of malicious URLs and identification of threat types are critical to thwart these attacks. Knowing the type of a threat enables estimation of the severity of the attack and helps adopt an effective countermeasure. Existing methods typically detect malicious URLs of a single attack type. In this paper, we propose a method using machine learning to detect malicious URLs of all the popular attack types and to identify the nature of the attack a malicious URL attempts to launch. Our method uses a variety of discriminative features including textual properties, link structures, webpage contents, DNS information, and network traffic. Many of these features are novel and highly effective. Our experimental studies with 40,000 benign URLs and 32,000 malicious URLs obtained from real-life Internet sources show that our method delivers a superior performance: the accuracy was over 98% in detecting malicious URLs and over 93% in identifying attack types. We also report our studies on the effectiveness of each group of discriminative features, and discuss their evadability.
Jonghoon Kwon, Jehyun Lee, Heejo Lee, "Hidden Bot Detection by Tracing Non-human Generated Traffic at the Zombie Host", Int'l Conf. on Information Security Practice and Experience (ISPEC), pp. 343-361, May. 30. 2011.
Defeating botnets is the key to securing the Internet. Many cyber attacks are launched by botnets, including DDoS, spamming, click fraud and information theft. Although numerous methods have been proposed to detect botnets, botnet detection is still a challenging issue, as adversaries constantly improve bots to make them stealthier. Existing anomaly-based detection mechanisms, particularly network-based approaches, are not sufficient to defend against sophisticated botnets since they are too heavy or generate a non-negligible number of false alarms. Moreover, tracing attack sources is hardly achieved by existing mechanisms due to the pervasive use of source concealment techniques, such as IP spoofing and malicious proxies. In this paper, we propose a host-based mechanism to detect bots at the attack source. We monitor non-human-generated attack traffic and trace its corresponding processes. The proposed mechanism effectively detects malicious bots irrespective of their structural characteristics. It can protect networks and system resources by shutting down attack traffic at the attack source. We evaluate our mechanism with eight real-life bot codes that have distinctive architectures, protocols and attack modules. In experimental results, our mechanism effectively detects bot processes in around one second after they launch flood attacks or send spam mails, while generating no false alarms.
Dongwon Seo, Heejo Lee, Adrian Perrig, "Secure and Efficient Capability-based Power Management in the Smart Grid", IEEE Int'l Symposium on Parallel and Distributed Processing with Applications Workshops (ISPAW), pp. 119-126, May. 26. 2011.
As the smart grid is becoming a promising technology to control and save power generation and consumption, smart grid security should be a primary consideration to prevent catastrophic failures. In particular, simultaneous excessive power consumption can be a significant issue, because power providers cannot react quickly to massive demand, which can cause blackouts across wide regions. Many studies, such as DDoS prevention schemes, have addressed excessive resource consumption in legacy networks (e.g., the Internet). However, power management in the smart grid has its own requirements: reliable power supply, privacy preservation, efficient data communication and malicious behavior detection. Existing smart grid schemes consider some of these requirements, but do not address all of them. In this paper, in order to satisfy the four requirements, we propose a secure and efficient power management mechanism leveraging homomorphic data aggregation and capability-based power distribution. The proposed mechanism gathers the power demands of customers securely and efficiently, and distributes power to customers who have the capability. Furthermore, each customer can verify whether one's request is correctly delivered to the utility, and each distributor can detect misbehaving customers exceeding their capabilities. In our numerical measurements, using the proposed mechanism, a power provider takes 11.12 seconds until power distribution, a tolerably short time for a power provider to endure against excessive power consumption.
Kyoungsub Song, Dongwon Seo, Haemin Park, Heejo Lee, Adrian Perrig, "OMAP: One-way Memory Attestation Protocol for Smart Meters", IEEE Int'l Symposium on Parallel and Distributed Processing with Applications Workshops (ISPAW), pp. 111-118, May. 26. 2011.
A smart meter is one of the key elements of smart grids. An attacker can compromise smart meters by injecting malicious code, and take financial benefits by modifying memory contents of the smart meters. An attestation scheme can prevent such a memory forgery attack by verifying memory contents. In smart grids, however, attestation processes are performed remotely through networks by a faraway utility. Therefore, attestation processes are exposed to network attacks such as man-in-the-middle (MITM) attacks. Even though existing attestation mechanisms detect local attacks such as memory forgery, they are vulnerable to network attacks since they adopt a two-way attestation, the so-called challenge-response protocol. In this paper, we propose a novel attestation mechanism, termed One-way Memory Attestation Protocol (OMAP), not only to detect local attacks, but also to defend against network attacks. Instead of using two-way attestation, OMAP adopts a one-way attestation protocol; OMAP conducts a pre-defined internal algorithm, generates a checksum, and sends it to a verifier in one direction. Thus, OMAP does not require any information (e.g., challenges) from a verifier that could be exploitable by an adversary. In our experiments, with a smart meter scanning only 0.004% of its memory, OMAP enables a verifier to detect memory modification with 95% probability if an attacker changes 20% of the memory.
Jonghoon Kwon, Jehyun Lee, Taebum Kim, Heejo Lee, "Subgraph-based Metamorphic Malwares Analysis", Int'l Conf. on Internet (ICONI), pp. 855-856, Dec. 2010.
Malware authors commonly use code obfuscation to evade detection mechanisms. When this technique is applied to malwares, they can change their instruction sequences and even their signatures. These malwares, which have the same functionality but different appearances, are able to evade signature-based AV products. Thus, AV vendors pay a large cost to analyze and classify malware for generating new signatures. In this paper, we propose a new approach for analyzing metamorphic malwares. The proposed mechanism first converts a malware's API call sequence into a CodeGraph through dynamic analysis. After that, we extract all subgraphs and analyze how similar two malwares' behaviors are through subgraph similarity. To validate the proposed mechanism, we use 46 real-world malwares including 20 variants. In the evaluation, all metamorphic malwares are classified correctly, and similar module behaviors among different malwares are also discovered.
Jehyun Lee, Jonghoon Kwon, Hyo-Jeong Shin, Heejo Lee, "Grouping Domain Names using DNS Query Graph", Int'l Conf. on Internet (ICONI), pp. 853-854, Dec. 2010.
Many malwares have network activities for command and control, updates, propagation, and so on. Because domain names are used during these network activities, such malicious activities can be observed and blocked at the DNS. However, to evade static blacklists, recent malwares use numbers of newly generated domain names. In this paper, we introduce a domain name grouping method using a DNS query graph that captures the query strategies of DNS clients. By grouping the domain names that have statistically and sequentially similar query strategies, we extract the malicious domain name groups of multi-domain malwares from the numerous domain names observed. From experiments with the DNS trace of an ISP network, we find tens of multi-domain malwares and commonly observed unique DNS query strategies. As a contribution of the method, the grouping result enhances the efficiency of blacklists by detecting newly appearing malicious domain names.
Guhyeon Jeong, Euijin Choo, Joosuk Lee, Munkhbayar Bat-Erdene, Heejo Lee, "Generic Unpacking using Entropy Analysis", IEEE MALWARE, pp. 114-121, Oct. 20. 2010.
Malwares attempt to evade AV scanners using various obfuscation techniques. Packing is a popular obfuscation technique used by 80% of malwares. In this paper, we propose a generic unpacking mechanism to find the original entry point (OEP) using entropy analysis. The experiment using 110 packed executables demonstrates that the proposed mechanism can locate the OEPs of 72% of the packed executables. Furthermore, we show how the mechanism can be applied to packed malwares.
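The entropy metric underlying this kind of analysis can be sketched as follows: packed or encrypted regions score near the 8 bits/byte maximum, and watching the score fall as the unpacking stub writes decoded code is the intuition behind locating the OEP. This is a generic Shannon-entropy sketch, not the paper's exact procedure.

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy of a byte buffer, in bits per byte (0..8).
    Packed/encrypted data approaches 8; plain code or text scores
    noticeably lower, so an entropy drop signals unpacking progress."""
    if not data:
        return 0.0
    n = len(data)
    counts = Counter(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

In practice one would sample the entropy of the executing region repeatedly and treat a sustained drop as the candidate OEP.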
Jehyun Lee, Jonghoon Kwon, Hyo-Jeong Shin, Heejo Lee, "Tracking Multiple C&C Botnets by Analyzing DNS Traffic", Workshop on Secure Network Protocols (NPSEC), pp. 67-72, Oct. 5. 2010.
Botnets have been considered a main source of Internet threats. A common feature of recent botnets is the use of one or more C&C servers with multiple domain names for the purpose of increasing flexibility and survivability. In contrast with single-domain botnets, these multi-domain botnets are hard to quarantine because they regularly change the domain names used for connecting to their C&C server(s). In this paper, we introduce a method for tracking botnets by analyzing the relationships among domain names in DNS traffic generated by botnets. By examining the DNS queries from clients that accessed known malicious domain names, we can find a set of unknown malicious domain names and their relationships. This method enables tracking of malicious domain names and of clients infected by multiple bot codes, which make botnets revivable against existing quarantine methods. From experiments with one hour of DNS traffic in an ISP network, we find tens of botnets, each with tens of malicious domains. In addition to botnet domains, we find a set of other domain names used for spamming or advertising servers. The proposed method can be used for quarantining recent botnets and for limiting their survivability by tracking the change of domain names.
Tae Hwan Oh, Shinyoung Lim, Young B Choi, Kwang Roh park, Heejo Lee, Hyunsang Choi, "State of the Art of Network Security Perspectives in Cloud Computing", SUComS, Vol. 78, pp. 629-637, Sep. 17. 2010.
Cloud computing is now regarded as a social phenomenon that satisfies customers' needs. The customers' needs and the primary principle of economy (gaining maximum benefit from minimum investment) are reflected in the realization of cloud computing. We are living in a connected society with a flood of information, and without computers connected to the Internet, our activities and work of daily living would be impossible. Cloud computing can provide customers with custom-tailored application software and user environments based on the customers' needs by adopting on-demand outsourcing of computing resources through the Internet. It also provides cloud computing users with high-end computing power and expensive application software packages, and accordingly the users access their data and the application software located at the remote system. As cloud computing systems are connected to the Internet, network security issues of cloud computing must be considered before real-world service. In this paper, surveys and issues on network security in cloud computing are discussed from the perspective of real-world service environments.
Jusuk Lee, Kyoochang Jeong, Heejo Lee, "Detecting Metamorphic Malware using Code Graphs", ACM SIGAPP Symposium on Applied Computing (ACM SAC), Mar. 22. 2010.
Malware writers and detectors have been running an endless battle. Self-defense is the weapon most malware writers prepare against malware detectors. Malware writers have tried to evade the improved detection techniques of anti-virus (AV) products. Packing and code obfuscation are two popular evasion techniques. When these techniques are applied to malwares, they are able to change their instruction sequences while maintaining their intended functions. We propose a detection mechanism that defeats these self-defense techniques to improve malware detection. Since an obfuscated malware is able to change the syntax of its code while preserving its semantics, the proposed mechanism uses the semantic invariant. We convert the API call sequence of the malware into a graph, commonly known as a call graph, to extract the semantics of the malware. The call graph can be reduced to a code graph used for the semantic signatures of the proposed mechanism. We show that the code graph can represent the characteristics of a program exactly and uniquely. Next, we evaluate the proposed mechanism by experiment. The mechanism has a 91% detection ratio on real-world malwares and detects 300 metamorphic malwares that can evade AV scanners. In this paper, we show how to analyze malwares by extracting program semantics using static analysis. It is shown that the proposed mechanism provides a high possibility of detecting malwares even when they attempt self-protection.
Yu-seung Kim, Heejo Lee, "On Classifying and Evaluating the Effect of Jamming Attacks", Int'l Conf. on Information Networking (ICOIN), Jan. 27. 2010.
While various wireless networks have advanced rapidly and become an indispensable infrastructure in our network environment, jamming attacks are a common challenging problem that makes them unavailable. A jamming attack is easy to launch with little effort, while its damage is severe since it disrupts the communication of all the nodes in the jamming range. In this paper, we expand the definition of traditional jamming to cover more attacks, such as link-layer jamming, and propose taxonomies of jamming attacks and countermeasures in wireless networks. We also propose generalized risk criteria to evaluate the effect of jamming and instantiate an application through a case study. By providing a classification of jamming attacks and an evaluation of their effect, we expect to reveal the undeveloped areas of related work and to foster studies on them.
Inhwan Kim, Hyunsang Choi, Heejo Lee, "BotXrayer: Exposing Botnets by Visualizing DNS Traffic", Int'l Conf. on Internet (ICONI), Dec. 18. 2009.
Botnets pose a major problem to Internet security. They can cause various online crimes such as DDoS attacks, identity thefts and spam e-mails. While there have been many attempts to detect botnets, most of these studies have difficulties in detecting botnets due to their evasive techniques of resembling normal traffic. In this paper, we propose a visualization method, BotXrayer, to detect botnets. It displays DNS traffic on the plane of parallel coordinates using four carefully selected parameters that efficiently represent a botnet's hierarchy and attack patterns. BotXrayer provides a view of graphs that helps humans recognize botnet patterns intuitively. Observing that botnets frequently generate DNS traffic forming unique patterns, we develop six botnet attack signatures. We adopt four logic operations (XOR, AND, OR, SUB) to find hidden botnet identities and to display distinct botnet graphs from noisy lines on the coordinates. Experiments with real traces in /16 networks show that the proposed mechanism can detect various botnets effectively. Furthermore, botnet activities, such as launching DRDoS attacks, poisoning DNS cache entries and sending spam, were captured.
Hyunsang Choi, Heejo Lee, Hyogon Kim, "BotGAD: Detecting Botnets by Capturing Group Activities in Network Traffic", Int'l Conf. on COMmunication System softWAre and middlewaRE (COMSWARE), Jun. 17. 2009.
Recent malicious attempts are intended to obtain financial benefits using a botnet, which has become one of the major Internet security problems. Botnets can cause severe Internet threats such as DDoS attacks, identity theft, spamming, and click fraud. In this paper, we define a group activity as an inherent property of the botnet. Based on the group activity model and metric, we develop a botnet detection mechanism, called BotGAD (Botnet Group Activity Detector). BotGAD can detect unknown botnets from large-scale networks in real time. Botnets frequently use DNS to rally infected hosts, launch attacks and update their codes. We implemented BotGAD using DNS traffic and showed its effectiveness by experiments on real-life network traces. BotGAD captured 20 unknown and 10 known botnets from two days of campus network traces.
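One hypothetical way to instantiate a group activity score over DNS traffic is the average Jaccard similarity of the client sets querying a domain across consecutive time windows: infected hosts rallying to the same C&C domain keep the score high, while a benign domain sees a changing client population. This is an assumed simplification for illustration, not BotGAD's exact metric.

```python
def group_activity(window_hosts):
    """Average pairwise Jaccard similarity between the sets of client
    hosts observed querying one domain in consecutive time windows.
    Scores near 1.0 indicate the same group of hosts keeps returning,
    the group-activity property attributed to botnets."""
    def jaccard(a, b):
        return len(a & b) / len(a | b) if a | b else 0.0
    pairs = list(zip(window_hosts, window_hosts[1:]))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)
```

A detector would flag domains whose score stays above a tuned threshold across many windows.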
Euijin Choo, Heejo Lee, Wan Yeon Lee, "Dynamic Multimedia Scheduling against Motion Based DoS attacks", Int'l Conf. on Information Networking (ICOIN), Jan. 23. 2009.
Many motions lead to drastic degradation of QoS in multimedia communications. However, previous scheduling schemes do not consider the effect of motions in video streaming. Here we highlight the importance of handling motions and introduce a new kind of attack manipulating the amount of motion, called Motion-based DoS (MDoS) attacks. We then propose a dynamic multimedia scheduling scheme against MDoS attacks, called Scheduling Multimedia against MDoS attacks (SMaM). SMaM dynamically schedules video streams adaptively to the current network conditions. Specifically, SMaM increases resilience against MDoS attacks by coping with motion traffic. Experimental results show that SMaM maximizes video quality regardless of the change in the amount of motion. In particular, SMaM minimizes the amount of useless frames for dynamically changing video, e.g., a 45% improvement over conventional scheduling schemes, which we call preferential packet scheduling (PPS). Also, SMaM presents a low and stable average delay, showing a 16% improvement over PPS schemes.
Jehyun Lee, Heejo Lee, Hoh Peter In, "Scalable Attack Graph for Risk Assessment", Int'l Conf. on Information Networking (ICOIN), Jan. 21. 2009.
The growth in the size of networks and the number of vulnerabilities makes network security increasingly challenging to manage. Especially difficult to manage are multi-step attacks, which use one or more vulnerabilities as stepping stones. Attack graphs are widely used for analyzing multi-step attacks. However, because these graphs grow large, they are too expensive to work with. In this paper, we propose a mechanism to manage attack graphs using a divide-and-conquer approach. To enhance the efficiency of risk analyzers working with attack graphs, we convert a large graph into multiple sub-graphs named risk units and provide these lightweight graphs to the analyzers. As a result, when an algorithm of time complexity order k works on an attack graph with n vertices, a division with c overhead vertices reduces the workload from n^k to r(n + c)^k, where the coefficient r shrinks geometrically from 2^(-k) depending on the number of division rounds. By this workload reduction, risk assessment processes that work with large attack graphs become more scalable and resource-practical.
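The claimed workload reduction can be illustrated with toy arithmetic: running an O(n^k) analysis over many small risk units, each carrying a few overhead vertices, costs far less than one pass over the whole graph. The repeated-halving scheme and constants below are illustrative assumptions, not the paper's exact division procedure.

```python
def monolithic_cost(n, k):
    # Cost of an O(n^k) analysis over the full attack graph.
    return n ** k

def divided_cost(n, c, k, d):
    # Cost after d rounds of halving: 2**d risk units, each holding
    # n / 2**d original vertices plus c overhead vertices introduced
    # by the division.
    units = 2 ** d
    return units * (n // units + c) ** k
```

For n = 1024 vertices, k = 2, c = 8 overhead vertices and d = 4 rounds, the divided workload is 16 * 72^2 = 82,944 versus 1,048,576 monolithically, a reduction of more than 90%.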
Le Xuan Hung, Riaz A.S, Hassan J., S.M.K. Raazi, Y. Weiwei, NT Canh, P.T.H.Truc, Sungyoung Lee, Heejo Lee, Yuseung Son and Migu, "Activity-Oriented Access Control for Ubiquitous Environments", IEEE Int'l Conf. on Consumer Communications & Networking (IEEE CCNC), Jan. 10. 2009.
Recent research on ubiquitous computing has introduced a new concept of activity-based computing as a way of thinking about supporting human activities in ubiquitous computing environments. Existing access control approaches, such as RBAC, are inappropriate to support this concept because they do not consider human activities. In this paper, we propose the Activity-Oriented Access Control (AOAC) model, aiming to support users' activities in ubiquitous environments. We have designed and implemented our initial AOAC system. We also built a simple scenario to illustrate how it supports user activities. The results have shown that AOAC meets our objectives. Also, AOAC takes approximately 0.26 seconds to give a response, which shows that AOAC is suitable for real-time environments.
Peng Li, Hyundo Park, Debin Gao, Jianming Fu, "Bridging the Gap between Data-flow and Control-flow Analysis for Anomaly Detection", Annual Computer Security Applications Conf. (ACSAC), Dec. 2008.
Host-based anomaly detectors monitor the control-flow and data-flow behavior of system calls to detect intrusions. Control-flow-based detectors monitor the sequence of system calls, while data-flow-based detectors monitor the data propagation among arguments of system calls. Besides pointing out that data-flow-based detectors can be layered on top of control-flow-based ones (or vice versa) to improve accuracy, there is a large gap between the two research directions in that research along one direction had been fairly isolated and had not made good use of results from the other direction. In this paper, we show how data-flow analysis can leverage results from control-flow analysis to learn more accurate and useful rules for anomaly detection. Our results show that the proposed control-flow-analysis-aided data-flow analysis reveals some accurate and useful rules that cannot be learned in prior data-flow analysis techniques. These relations among system call arguments and return values are useful in detecting many real attacks. A trace-driven evaluation shows that the proposed technique enjoys low false-alarm rates and overhead when implemented on a production server.
Riaz Ahmed Shaikh, Hassan Jameel, Brian J. d'Auriol, Sungyoung Lee, Young-Jae Song, Heejo Lee, "Trusting Anomaly and Intrusion Claims for Cooperative Distributed Intrusion Detection Schemes of Wireless Sensor Networks", Int'l Conf. on Young Computer Scientists (ICYCS), Nov. 2008.
Any unidentified malicious nodes in the network could send faulty anomaly and intrusion claims about the legitimate nodes to the other nodes. Verifying the validity of such claims is a critical and challenging issue that is not considered in existing cooperative distributed anomaly and intrusion detection schemes for wireless sensor networks. In this paper, we propose a validation algorithm that addresses this problem. This algorithm utilizes the concept of intrusion-aware reliability, which helps to provide adequate reliability at a modest communication cost.
Inhwan Kim, Hyunsang Choi, Heejo Lee, "Botnet Visualization using DNS Traffic", Int'l Workshop on Information Security Applications (WISA), Sep. 25. 2008.
One of the major challenges for network security is the botnet. It is one of the major causes of network threats such as spam, DDoS (distributed denial-of-service) attacks, and so on. To be sure, there have been studies specifically concerning botnet detection, but most of these studies can detect only specific types of botnets, and only with an offline analysis, making it hard to respond to a botnet immediately. In this paper, we describe our development of a visualization mechanism using DNS (Domain Name System) traffic. The goal of our mechanism is to provide a network administrator with meaningful visual information allowing the administrator to detect botnets intuitively. We can reveal botnets from DNS traffic, and our mechanism can afford to operate in real time, because DNS accounts for only a small amount of network traffic. The mechanism is comprised of parallel coordinates to describe a botnet in an intuitive graphical pattern. The coordinates represent three different parameters in a DNS packet. The color of a line in the parallel coordinates is determined by statistical values from the cumulative series of duplicated DNS data over time. We define four patterns of graphs as signatures for the visualization system, and we have demonstrated that our mechanism is effective in revealing many important aspects of real-world botnet patterns by experiments on /16 campus network traces.
Hyundo Park, Peng Li, Debin Gao, Heejo Lee, Robert H. Deng, "Distinguishing between FE and DDoS Using Randomness check", Int'l Conf. on Information Security (ISC), Vol. 5222, pp. 131-145, Sep. 16. 2008.
Threats posed by Distributed Denial of Service (DDoS) attacks are becoming more serious day by day. Accurately detecting DDoS becomes an important and necessary step in securing a computer network. However, a Flash Event (FE), which is created by legitimate requests, shares very similar characteristics with DDoS in many aspects, making it hard to distinguish from DDoS attacks. In this paper, we propose a simple yet effective mechanism called FDD (FE and DDoS Distinguisher) to distinguish FEs and DDoS. To the best of our knowledge, this is the first effective and practical mechanism that distinguishes FEs and DDoS attacks. Our trace-driven evaluation shows that FDD distinguishes between FEs and DDoS attacks accurately and efficiently by utilizing only a very small amount of memory, making it possible to implement on high-speed networking devices.
Jeheon Han, Jonghoon Kwon, Heejo Lee, "HoneyID : Unveiling Hidden Spywares by Generating Bogus Events", IFIP Int'l Conf. on ICT Systems Security and Privacy Protection (IFIP SEC), Vol. 278, pp. 669-673, Sep. 9. 2008.
A particular type of spyware that covertly uses the user's events, such as keyloggers and password stealers, has become a big threat to Internet users. Due to the prevalence of spywares, the user's private information can easily be exposed to an attacker. Conventional anti-spyware programs have used signatures to defend against spywares. Unfortunately, this mechanism cannot detect unknown spywares. In this paper, we propose a spyware detection mechanism, called HoneyID, which can detect unknown spywares using an enticement strategy. HoneyID generates bogus events to trigger the spyware's actions and then detects hidden spywares among running processes that operate abnormally. We implemented the HoneyID mechanism as a Windows-based program, and evaluated its effectiveness against 6 different known spywares (3 keyloggers and 3 FTP password sniffers). From this study, we show that HoneyID can be effective in detecting unknown spywares with high accuracy.
Dongwon Seo, Heejo Lee, Ejovi Nuwere, "Detecting More SIP Attacks on VoIP Services by Combining Rule Matching and State Transition Models", IFIP Int'l Conf. on ICT Systems Security and Privacy Protection (IFIP SEC), Vol. 278, pp. 397-411, Sep. 9. 2008.
The Session Initiation Protocol (SIP) has been widely used for Voice over IP (VoIP) services because of its potential advantages, economical efficiency and call setup simplicity. However, SIP-based VoIP services face two main security issues: malformed SIP message attacks and SIP flooding attacks. In this paper, we propose a novel mechanism for SIP-based VoIP systems utilizing a rule matching algorithm and state transition models. It detects not only the two main attacks, but also covers more SIP attacks. Instead of simply combining rule comparison with counting the number of SIP messages, we develop secure RFC 3261 rules based on the existing RFC 3261 rules, so that the proposed mechanism shows a 26% higher detection rate for malformed attacks. Moreover, we utilize session information and define the features of each state in order to detect abnormal situations including SIP flooding. As a result, it is shown that the proposed mechanism provides not only higher accuracy, but also coverage of more SIP attacks including the two main attacks.
Riaz Ahmed Shaikh, Hassan Jameel, Brian J. d'Aurio, Sungyoung Lee, Young-Jae Song, Heejo Lee, "Network Level Privacy for Wireless Sensor Networks", Int'l Conf. on Information Assurance and Security (IAS), pp. 261-266, Sep. 2008.
The full network-level privacy spectrum comprises identity, route, location and data privacy. Existing privacy schemes for wireless sensor networks provide only partial network-level privacy. Providing full network-level privacy is a critical and challenging problem due to the constraints imposed by the sensor nodes, sensor networks and QoS issues. In this paper, we propose a full network-level privacy solution that addresses this problem. This solution comprises an Identity, Route and Location (IRL) privacy algorithm and a data privacy mechanism, which collectively provide protection against privacy disclosure attacks such as eavesdropping and hop-by-hop traceback attacks.
Sunghyun Kim, Heejo Lee, "Reducing Payload Scans for Attack Signature Matching Using Rule Classification", Australasian Conf. on Information Security and Privacy (ACISP), Vol. 5107, pp. 350-360, Jul. 7. 2008.
Network intrusion detection systems rely on a signature-based detection engine. When under attack or during heavy traffic, the detection engine needs to make fast decisions on whether a packet or a sequence of packets is normal or malicious. However, if packets have heavy payloads or the system has a great number of attack patterns, the high cost of payload inspection severely diminishes the detection performance. Therefore, it is better to avoid unnecessary payload scans by checking the protocol fields in the packet header first, before executing the heavy operations of payload inspection. Furthermore, when payload inspection is necessary, it is better to compare as few attack patterns as possible. In this paper, we propose a method that reduces payload scans by integrating protocol field processing with payload signature classification. While performance improvements depend on the given networking environment, the experimental results with the DARPA data set show that the proposed method outperforms the latest Snort by over 6.5% for web traffic.
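The header-first idea can be sketched as a two-stage matcher: cheap header-field comparisons gate the expensive payload scan, so most rules never touch the payload at all. The rule schema (proto, dport, pattern, sid) is a hypothetical simplification of real signature rules.

```python
def match_rules(packet, rules):
    """Two-stage signature matching. Stage 1 checks cheap header
    fields; stage 2 scans the payload only for rules whose header
    constraints already matched, avoiding unnecessary payload work."""
    alerts = []
    for rule in rules:
        # Stage 1: header check -- skip the payload scan entirely
        # when the protocol or destination port does not match.
        if rule["proto"] != packet["proto"] or rule["dport"] != packet["dport"]:
            continue
        # Stage 2: payload inspection, reached by few rules.
        if rule["pattern"] in packet["payload"]:
            alerts.append(rule["sid"])
    return alerts
```

A production engine would additionally group rules by header tuple so stage 1 becomes a single hash lookup instead of a linear pass.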
Keun Park, Dongwon Seo, Jaewon Yoo, Heejo Lee, Hyogon Kim, "Unified Rate Limiting in Broadband Access Networks for Defeating Internet Worms and DDoS Attacks", Int'l Conf. on Information Security Practice and Experience (ISPEC), Vol. 4991, pp. 176-187, Apr. 21. 2008.
Internet worms and DDoS attacks are considered the two most menacing attacks on today's Internet. The traditional wisdom is that they are different beasts, and they should be dealt with independently. In this paper, however, we show that a unified rate limiting algorithm is possible, which effectively works on both Internet worms and DDoS attacks. The unified approach leads to higher worm traffic reduction performance than that of existing rate limiting schemes geared toward worm mitigation, in addition to the added advantage of dropping most DDoS attack packets. In our experiments with attack traffic generated by attack tools, the unified rate limiting scheme drops 80.7% of worm packets and 93% of DDoS packets, while at most 69.2% of worm and 3.4% of DDoS packets are dropped by previous worm scan rate limiting schemes. Also, the proposed scheme requires fewer computing resources, and has higher accuracy in dropping attack packets while not dropping legitimate packets.
Minjin Kwon, Kyoochang Jeong, Heejo Lee, "PROBE: A Process Behavior-Based Host Intrusion Prevention System", Int'l Conf. on Information Security Practice and Experience (ISPEC), Vol. 4991, pp. 203-217, Apr. 2008.
Attacks exploiting vulnerabilities are nowadays considered a severe threat. Thus, a host needs a device that monitors system activities for malicious behaviors and blocks those activities to protect itself. In this paper, we introduce PROcess BEhavior (PROBE), which monitors processes running on a host to identify abnormal process behaviors. PROBE builds a process tree using only the process creation relationship, and then measures each edge weight to determine whether the invocation of each child process constitutes an abnormal behavior. PROBE has low processing overhead compared with existing intrusion detection systems that use sequences of system calls. In an evaluation on a representative set of critical security vulnerabilities, PROBE shows desirable and practical intrusion prevention capabilities, with only 5% false positives and 5% false negatives. PROBE is thus a heuristic approach that can also detect unknown attacks, and it is not only lightweight but also accurate.
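A PROBE-style edge weight can be sketched as a learned parent-to-child invocation frequency. The weighting scheme below (relative frequency against a threshold) is an assumption for illustration; the paper defines its own edge-weight measure.

```python
from collections import defaultdict

class ProcessTreeMonitor:
    """Sketch: learn how often each parent program spawns each child,
    then flag invocations whose learned frequency is below a threshold.
    The frequency-threshold rule is an illustrative assumption."""

    def __init__(self, threshold=0.01):
        self.threshold = threshold
        self.spawn_counts = defaultdict(lambda: defaultdict(int))
        self.parent_totals = defaultdict(int)

    def train(self, parent, child):
        # Record one observed parent -> child process creation.
        self.spawn_counts[parent][child] += 1
        self.parent_totals[parent] += 1

    def is_abnormal(self, parent, child):
        total = self.parent_totals[parent]
        if total == 0:
            return True  # parent was never observed spawning anything
        freq = self.spawn_counts[parent][child] / total
        return freq < self.threshold
```

Because only process-creation events are examined, the per-event cost is constant, unlike system-call-sequence models that must match long traces.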
Kyoochang Jeong, Heejo Lee, "Code Graph for Malware Detection", Int'l Conf. on Information Networking (ICOIN), Jan. 2008.
When an application program is executed for the first time, the results of its execution are not always predictable. Since a host is damaged by malware as soon as it is executed, detecting and blocking malware before its execution is the most effective means of protection. In contrast to current research into detecting malware based on its behavior during execution, we propose a new mechanism that can preview the effect of a program on a system. The mechanism we developed represents the distinctions between portable executable binaries: it analyzes the instructions related to the system-call sequence in a binary executable and presents the result as a topological graph. This topological graph is called the code graph, and the preview system is called the code graph system. We have tested various real application programs with the code graph system and identified distinctive characteristics that can be used for distinguishing normal software from malware such as worm code and botnet programs. Our system detected all known malware used in the experiment and distinguished 67% of unknown malware from normal programs. In this paper, we show how to analyze the effects of executable binaries before their execution, and how normal software can be effectively distinguished from malware by applying the code graph.
Ngo Trong Canh, Tran Van Phuong, Young-Koo Lee, Sungyoung Lee, Heejo Lee, "A Location-aware Key Predistribution Scheme for Distributed Wireless Sensor Networks", IEEE Int'l Conf. on Networks (IEEE ICON), pp. 188-193, Nov. 19. 2007.
Key establishment plays a central role in authentication and encryption in wireless sensor networks, especially when they are mainly deployed in hostile environments. Because of the strict constraints on power, processing, and storage, designing an efficient key establishment protocol is not a trivial task, and it is infeasible to apply public key techniques to large-scale wireless sensor networks. Most proposed solutions are based on symmetric key techniques and mainly focus on key predistribution mechanisms. In this paper, we present a new key predistribution scheme using bivariate polynomials combined with expected deployment knowledge. We show that our approach offers an advantage in terms of resilience against node compromise.
Hyunsang Choi, Hanwoo Lee, Heejo Lee, Hyogon Kim, "Botnet Detection by Monitoring Group Activities in DNS Traffic", IEEE Int'l Conf. Computer and Information Technology (IEEE CIT), Oct. 17. 2007.
Recent malicious attempts are intended to obtain financial benefits through a large pool of compromised hosts, which are called software robots or simply “bots.” A group of bots, referred to as a botnet, is remotely controllable by a server and can be used for sending spam mails, stealing personal information, and launching DDoS attacks. The growing popularity of botnets compels us to find proper countermeasures, but existing defense mechanisms hardly catch up with the speed of botnet technologies. In this paper, we propose a botnet detection mechanism that monitors DNS traffic for the group activity formed by DNS queries simultaneously sent by distributed bots. A few approaches have been proposed based on particular DNS information generated by a botnet, but they are easily evaded by changing bot programs. Our anomaly-based botnet detection mechanism is more robust than previous approaches, so that variants of bots can be detected by looking at their group activities in DNS traffic. Experiments on a campus network show that the proposed mechanism can detect botnets effectively while bots are connecting to their server or migrating to another server.
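The group-activity signal described above can be sketched as a windowed count of distinct hosts querying the same domain. The window length and group-size threshold are assumed parameters, not the paper's tuned values.

```python
from collections import defaultdict

def detect_group_activity(dns_queries, window=60, min_group=50):
    """Sketch: bucket DNS queries per (domain, time window) and flag
    domains queried near-simultaneously by an unusually large group
    of distinct hosts -- the fingerprint of bots resolving their
    command-and-control server together."""
    buckets = defaultdict(set)  # (domain, window index) -> querying hosts
    for ts, src, domain in dns_queries:
        buckets[(domain, int(ts // window))].add(src)
    return {domain for (domain, _), hosts in buckets.items()
            if len(hosts) >= min_group}
```

Because the detector keys on the *group* behavior rather than on any fixed domain name or query pattern, changing the bot binary or the C&C domain does not by itself evade it.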
Nguyen Ngoc Diep, Sungyoung Lee, Young-Koo Lee, HeeJo Lee, "A Privacy Preserving Access Control Scheme using Anonymous Identification", IEEE Int'l Conf. on Embedded and Real-Time Computing Systems and Applications (IEEE RTCSA), pp. 482-487, Aug. 21. 2007.
Compared to all emerging issues, privacy is probably the most prominent concern when it comes to judging the effects of a widespread deployment of ubiquitous computing. On one hand, service providers want to authenticate legitimate users and make sure they are accessing their authorized services in a legal way. On the other hand, users prefer not to expose any sensitive information to anybody. They want to have complete control over their personal data, without being tracked wherever they are, whenever and whatever they do. In this paper, we introduce an authentication and access control scheme based on anonymous identification to secure interactions between users and services in ubiquitous environments. The scheme uses anonymous user IDs, a sensitive-data sharing method, and account management to provide lightweight authentication while keeping users anonymous as they interact with services in a secure and flexible way.
Weiwei Yuan, Donghai Guan, Sungyoung Lee, Young-Koo Lee, "A Reputation System based on Computing with Words", Int'l Conf. on Wireless Communications and Mobile Computing (IWCMC), pp. 132 - 137, Aug. 12. 2007.
A reputation system is a way to maintain trust in dynamic environments by collecting, distributing, and aggregating feedback about service providers' past behaviors. Most existing reputation systems assume that raters evaluate the ratee by means of numerical values. However, raters sometimes cannot express their judgments with exact numerical values, especially when they have uncertain or ambiguous opinions about the ratee. This paper introduces a novel reputation system based on the methodology of Computing with Words (CW), in which ratings and reputations are words and propositions drawn from a natural language instead of numerical values. Our reputation system has a sound mathematical basis; at the same time, it is convenient for raters to express their judgments and simple for participants to understand the integrated reputation.
Le Xuan Hung, Sungyoung Lee, Young-Koo Lee, Heejo Lee, "Activity-based Access Control to Hospital Information", IEEE Int'l Conf. on Embedded and Real-Time Computing Systems and Applications (IEEE RTCSA), pp. 448-496, Aug. 2007.
Hospital work is characterized by the need to manage multiple activities simultaneously, constant local mobility, frequent interruptions, and intense collaboration and communication. Hospital employees must handle a large amount of data that is often tied to specific work activities, which calls for a proper access control model. In this paper, we propose a novel approach, the Activity-based access Control Model (ACM). Unlike conventional approaches that exploit user identity/role information, ACM leverages a user’s activities to determine that user’s access permissions. In ACM, a user is assigned to perform a number of actions if s/he possesses a set of satisfactory attributes, and access permissions to hospital information are granted according to the user’s actions. By doing so, ACM offers a number of advantages over conventional models: it (1) facilitates the user’s work and (2) reduces the complexity and cost of access management. Though ACM was first designed to support clinical work in hospitals, it can be applied in other activity-centered environments.
Pho Duc Giang, Le Xuan Hung, Riaz Ahmed Shaikh, Yonil Zhung, Sungyoung Lee, Young-Koo Lee, Heejo Lee, "A Trust-Based Approach to Control Privacy Exposure in Ubiquitous Computing Environments", Int'l Conf. on Pervasive Services (ICPS), pp. 149 - 152, Jul. 2007.
In ubiquitous computing environments, service servers play the central role of actively providing information about a person to help people determine whether he is available for contact. A tradeoff exists in these systems: more sources of data and higher fidelity in those sources can improve people’s decisions, but at the cost of greater privacy reduction. Moreover, there is generally no a priori trust relationship among entities interacting in pervasive computing environments, which makes it essential to establish trust from scratch. This task becomes extremely challenging when it is simultaneously necessary to protect the privacy of the users involved. In this paper, we first show how the trust evaluation process of the user’s system can be based on previous interactions and peer recommendations. We then propose a solution that relies on trust to control privacy disclosure, governed by a pre-defined privacy policy. Several tuning parameters and options are suggested so that end-users can customize them to meet the security and privacy requirements of a ubiquitous system.
Nguyen Ngoc Diep, Sungyoung Lee, Young-Koo Lee, Heejo Lee, "Contextual Risk-based Access Control", Int'l Conf. on Security and Management (SAM), pp. 406-412, Jun. 2007.
Context-based access control is an emerging approach for modeling adaptive solutions, making access control management more flexible and powerful. However, these strategies are inadequate for the flexibility and performance that ubiquitous computing environments require, because such systems cannot effectively utilize all the benefits of the environment. In this paper, we propose a risk-based solution that makes use of many context parameters in order to produce sound decisions for a safe environment. We design a new model for risk assessment in ubiquitous computing environments and use risk as a key component of the decision-making process in our access control model.
ChangHee Lee, Heejo Lee, "A Password Stretching Method with User Specific Salts", Int'l World Wide Web Conf. (WWW), May. 2007.
In this paper, we present a password stretching method based on user-specific salts. Our scheme takes a similar time to stretch a password as a recent password stretching algorithm, but increases both the complexity of a pre-computation attack and the storage needed to hold the pre-computed results by a factor of 10^8.
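The general idea of iterated hashing with a user-specific salt can be sketched as follows. This is not the paper's construction; the hash choice, salt mixing, and iteration count are illustrative assumptions.

```python
import hashlib

def stretch(password, user_salt, iterations=2**16):
    """Sketch of salted password stretching (not the paper's exact
    scheme): iterate a hash many times to slow brute force, and mix
    in a per-user salt so an attacker must build a separate
    pre-computation table for every user rather than one shared table."""
    h = hashlib.sha256(user_salt + password.encode()).digest()
    for _ in range(iterations):
        h = hashlib.sha256(h + user_salt).digest()
    return h.hex()
```

With N possible users' salts, a rainbow-table attacker's table-building time and storage both grow by roughly a factor of N, while the defender's per-login cost stays a single stretching pass.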
Wan Yeon Lee, Heejo Lee, Hyogon Kim, "Energy-Aware QoS Adjustment of Multimedia Tasks with Uncertain Execution Time", Int'l Conf. on Computational Science (ICCS), Vol. 4490, pp. 709-712, May. 2007.
In order to make the best use of the available energy budget, we propose a QoS adjustment method that maximizes the total QoS provisioning of multimedia tasks with uncertain execution times. The method utilizes the probability distribution of a task’s execution time to determine an instant QoS level. Our experiments show that the proposed method provides 52% more QoS provisioning than the conventional method using a constant QoS level derived from the worst-case time.
Euijin Choo, Jehyun Lee, Heejo Lee, Giwon Nam, "SRMT: A Lightweight Encryption Scheme for Secure Real-time Multimedia Transmission", Int'l Conf. on Multimedia and Ubiquitous Engineering (MUE), pp. 60-65, Apr. 2007.
Securing multimedia transmission has become a challenging issue due to the popularization of real-time multimedia applications such as video surveillance, satellite communication, and web cams. However, previous security algorithms do not always guarantee a satisfactory degree of media quality and latency. In order to provide both security and media QoS, a viable security mechanism must consider three properties: processing time, compression rate, and security level. In this paper, we propose a lightweight encryption scheme without loss of security or media QoS, called Secure Real-time Media Transmission (SRMT), using two block transpositions and an XOR operation. SRMT is studied with respect to MPEG-4, which is widely used in today's multimedia applications. Experimental results with various MPEG-4 movies show that the SRMT scheme achieves real-time transmission of encrypted media data without loss of security or media QoS. Although SRMT operates on uncompressed raw data, it encrypts 3 times faster than AES encryption of MPEG-compressed data. We also show that manipulating key frames and a compression method can lessen the size increase of encrypted MPEG data, e.g., a 70.5% improvement over an existing combination of block transpositions and XOR operations.
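The two ingredients named above, block transposition plus an XOR pass, can be sketched as a toy cipher. This is only an illustration of why such operations are cheap, not SRMT itself: the key-derived permutation and single transposition here are assumptions (SRMT uses two transpositions and its own key schedule).

```python
def _perm(key, block):
    # Derive a deterministic block permutation from the key
    # (illustrative derivation; Python's sort is stable, so ties
    # are broken consistently).
    return sorted(range(block), key=lambda i: key[i % len(key)] ^ i)

def srmt_encrypt(data, key, block=16):
    """Toy sketch of transposition + XOR: permute bytes within each
    full block, then XOR with the repeating key."""
    perm = _perm(key, block)
    out = bytearray()
    for off in range(0, len(data), block):
        chunk = data[off:off + block]
        if len(chunk) == block:
            chunk = bytes(chunk[p] for p in perm)
        out += bytes(b ^ key[i % len(key)] for i, b in enumerate(chunk))
    return bytes(out)

def srmt_decrypt(ct, key, block=16):
    """Invert the XOR, then undo the block permutation."""
    perm = _perm(key, block)
    inv = [0] * block
    for i, p in enumerate(perm):
        inv[p] = i
    out = bytearray()
    for off in range(0, len(ct), block):
        chunk = bytes(b ^ key[i % len(key)]
                      for i, b in enumerate(ct[off:off + block]))
        if len(chunk) == block:
            chunk = bytes(chunk[q] for q in inv)
        out += chunk
    return bytes(out)
```

Both passes are linear-time byte shuffles and XORs with no multiplications or S-box lookups, which is the source of the speed advantage over a full block cipher such as AES.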
Pho Duc Giang, Le Xuan Hung, Sungyoung Lee, Young-Koo Lee, Heejo Lee, "A Flexible Trust-Based Access Control Mechanism for Security and Privacy Enhancement in Ubiquitous Systems", Int'l Conf. on Multimedia and Ubiquitous Engineering (MUE), pp. 698-703, Apr. 2007.
The ubiquity and mobility inherent in ubiquitous computing environments raise new challenges for invisible pervasive service provision. In particular, the mobility of users and devices causes unpredefined and unpredictable changes in physical location and in available resources and services, even at runtime and during the same service session, forcing us to consider very dynamic aspects of evaluation when designing an access control mechanism. Moreover, there is generally no a priori trust relationship among entities interacting in pervasive computing environments, which makes it essential to establish trust from scratch. This task becomes extremely challenging when it is simultaneously necessary to protect the privacy of the users involved. In this paper, we first show how the trust evaluation process of the user’s system can be based on previous accesses and peer recommendations. We then propose a solution that relies on trust to control access, governed by a pre-defined access control security policy. Several tuning parameters and options are suggested so that end-users can customize them to meet the security and privacy requirements of a ubiquitous system.
Heejo Lee, Minjin Kwon, Geoffrey Hasker, Adrian Perrig, "BASE: An Incrementally Deployable Mechanism for Viable IP Spoofing Prevention", ACM Asia Conf. on Computer and Communications Security (ASIACCS), pp. 20 - 31, Mar. 2007.
DoS attacks use IP spoofing to forge the source IP address of packets and thereby hide the identity of the source. This makes it hard to defend against DoS attacks, so IP spoofing will continue to be used as an aggressive attack mechanism even in distributed attack environments. While many IP spoofing prevention techniques have been proposed, none have achieved widespread real-world use. One main reason is the lack of properties favoring incremental deployment, an essential component for the adoption of new technologies. A viable solution needs to be not only technically sound but also economically acceptable. An incrementally deployable protocol should have three properties: initial benefits for early adopters, incremental benefits for subsequent adopters, and effectiveness under partial deployment. Since no previous anti-spoofing solution satisfies all three of these properties, we propose a new mechanism called "BGP Anti-Spoofing Extension" (BASE). The BASE mechanism is an anti-spoofing protocol designed to fulfill the incremental deployment properties necessary for adoption in current Internet environments. Based on simulations we ran using a model of Internet AS connectivity, BASE shows desirable IP spoofing prevention capabilities under partial deployment. We find that just 30% deployment can drop about 97% of attack packets. Therefore, BASE not only benefits adopters but also outperforms previous anti-spoofing mechanisms.
Nguyen Ngoc Diep, Le Xuan Hung, Yonil Zhung, Sungyoung Lee, Young-Koo Lee, Heejo Lee, "Enforcing Access Control Using Risk Assessment", European Conf. on Universal Multiservice Networks (ECUMN), Feb. 14. 2007.
Context-based access control is an emerging approach for modeling adaptive solutions, making access control management more flexible and powerful. In the ubiquitous environment, however, this approach is insufficient against many emerging security vulnerabilities, so improving current access control mechanisms is still necessary. Risk is an effective tool for decision-making in economics. In this paper, we design a new model for risk assessment in ubiquitous environments and use risk as a key component of the decision-making process in our access control model. This solution makes access control management more dynamic and precise.
Soo Kim, Jeong-Gil Ko, Jongwon Yoon, Heejo Lee, "Multiple-Objective Metric for Placing Multiple Base Stations in Wireless Sensor Networks", Int'l Symposium on Wireless Pervasive Computing (ISWPC), Feb. 2007.
The placement of base stations in wireless sensor networks affects the coverage of sensor nodes, the tolerance against faults or attacks, the energy consumption, and the congestion from communication. However, previous studies mostly focus on placing base stations to improve a single property rather than considering all of them. In this paper, we propose the Multiple-Objective Metric (MOM), which reflects four different metrics for base station placement in wireless sensor networks. First, the ratio of sensor nodes that can communicate with a base station, via either a single hop or multiple hops, represents the coverage of sensor nodes. Second, the average ratio of reachable sensor nodes after base station failures represents the fault tolerance of the network. Third, the average distance between sensor nodes and their nearest base station represents the energy consumption of the network. Fourth, the standard deviation of the degrees of base stations represents the average delay of the network due to congestion. We show that placing multiple base stations using the proposed MOM can improve these properties of wireless sensor networks in a balanced way.
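The four metrics enumerated above can be sketched for a set of 2-D sensor and base station positions. This simplified version considers single-hop reachability only (the paper also counts multi-hop paths), and how the four values are weighted into one score is left open here.

```python
import math
import statistics

def mom_metrics(sensors, bases, radio_range):
    """Sketch of MOM's four ingredients for points in the plane:
    coverage, fault tolerance, mean distance to the nearest base
    station (energy proxy), and load balance across base stations
    (congestion proxy). Single-hop reachability only."""
    def covered_by(s, exclude=None):
        return any(math.dist(s, b) <= radio_range
                   for j, b in enumerate(bases) if j != exclude)
    # 1. Coverage: fraction of sensors in range of some base station.
    coverage = sum(covered_by(s) for s in sensors) / len(sensors)
    # 2. Fault tolerance: average coverage after removing each BS in turn.
    fault_tol = statistics.mean(
        sum(covered_by(s, exclude=i) for s in sensors) / len(sensors)
        for i in range(len(bases)))
    # 3. Energy proxy: mean distance from a sensor to its nearest BS.
    mean_dist = statistics.mean(
        min(math.dist(s, b) for b in bases) for s in sensors)
    # 4. Congestion proxy: std deviation of BS degrees (sensors in range).
    degrees = [sum(math.dist(s, b) <= radio_range for s in sensors)
               for b in bases]
    degree_std = statistics.pstdev(degrees)
    return coverage, fault_tol, mean_dist, degree_std
```

A placement search would maximize coverage and fault tolerance while minimizing the distance and degree-deviation terms, e.g. via a weighted sum.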
Hassan Jameel, Riaz Ahmed Shaikh, Heejo Lee, Sungyoung Lee, "Human Identification Through Image Evaluation Using Secret Predicates", RSA Conf. (CT-RSA), Vol. 4377, No. 67, Feb. 2007.
The task of developing protocols for humans to securely authenticate themselves to a remote server has been an interesting topic in cryptography as a replacement for traditional, less secure, password-based systems. The protocols proposed in the literature are based on some underlying difficult mathematical problem, tuned so as to make it easily computable by humans. As a result, these protocols are easily broken when made efficiently executable. We present a Human Identification Protocol based on the ability of humans to efficiently process an image given a secret predicate. It is a challenge-response protocol in which a subset of the images presented satisfies a secret predicate shared by the challenger and the user. We conjecture that this secret predicate is hard to guess for adversaries, both humans and programs. The protocol can be efficiently executed by humans with knowledge of the secret, which in turn is easily memorable and replaceable. We prove the security of the protocol separately for human adversaries and for programs, based on two separate assumptions, and justify these assumptions with the help of an example implementation.
Soo Kim, Jeong-Gil Ko, Jongwon Yoon, Heejo Lee, "Energy-Efficient and Fault-Tolerant Positioning of Multiple Base Stations", Int'l Conf. on Information Networking (ICOIN), Vol. 5200, pp. 584-593, Jan. 2007.
As nodes have limited battery power in Wireless Sensor Networks (WSNs), energy efficiency and fault tolerance should be the two major issues in designing WSNs. However, previous studies on positioning base stations (BSs) in WSNs focus on energy efficiency only, yet mission-critical applications like emergency medical care systems should be guaranteed continuous service, which requires fault tolerance. In this paper, we propose to place multiple BSs considering not only energy efficiency but also fault tolerance. We present two strategies to find the optimal positions of BSs: (1) minimizing the average transmission energy, for energy efficiency; and (2) minimizing the additional energy consumption after a BS failure, for fault tolerance. The optimal positions for multiple BSs are derived by a metric that considers both energy efficiency and fault tolerance, with a weight factor. Our simulation results show that fault tolerance is important and strongly related to prolonging network lifetime. In addition, we show that our proposed scheme is more energy-effective than previously suggested strategies under unexpected environmental changes, which occur commonly in WSNs, and sustains the network lifetime effectively under BS failures.
Weiwei Yuan, Donghai Guan, Sungyoung Lee, Young-Koo Lee, Heejo Lee, "Filtering Out Unfair Recommendations for Trust Model in Ubiquitous Environments", Int'l Conf. on Information Systems Security (ICISS), Vol. 4332, pp. 357-360, Dec. 2006.
This paper presents a novel context-based approach to filtering out unfair recommendations for a trust model in ubiquitous environments. Context is used in our approach to analyze the user’s activity, state, and intention. An incremental-learning-based neural network is used to process the context in order to find doubtful recommendations. This approach has distinct advantages when dealing with randomly given irresponsible recommendations, individual unfair recommendations, and floods of unfair recommendations.
Hyogon Kim, Heejo Lee, Sangmin Shin, Inhye Kang, "On the cross-layer impact of TCP ack thinning on IEEE 802.11 wireless MAC dynamics", IEEE Vehicular Technology Conf. (IEEE VTC), Sep. 27. 2006.
ACK thinning refers to techniques that discard or reduce TCP acknowledgements (ACKs) for the purpose of diverting scarce bandwidth to TCP data traffic; Delayed ACK and ACK filtering fall into this category. It has been shown that under some circumstances the technique is effective in boosting TCP throughput on wireless links, in particular the IEEE 802.11 wireless LAN (WLAN). In this paper, however, we show that ACK thinning backfires under congestion due to its cross-layer impact on 802.11 MAC dynamics. Using the ACK filtering example, we demonstrate the phenomenon and analyze its cause. Based on the analysis, we show how IEEE 802.11 contention window size control solves the problem. Although only ACK filtering and Delayed ACK are considered in this paper, any other technique that saves on TCP ACK bandwidth usage shares the same fundamental issues.
Sangki Yun, Hyogon Kim, Heejo Lee, Inhye Kang, "Improving VoIP call capacity of multi-hop wireless networks through self-controlled frame aggregation", IEEE Vehicular Technology Conf. (IEEE VTC), Sep. 2006.
In multi-hop wireless networks, the number of supportable VoIP calls can be surprisingly small due to increased spatial interference. To mitigate the interference, voice frame aggregation can be used. In this paper, we depart from the traditional approaches that perform aggregation at the voice source and propose a technique called Self-Controlled Frame Aggregation (SCFA) that runs at wireless routers. The core idea of SCFA is to let the congestion itself control the degree of aggregation. Unlike existing frame aggregation approaches, SCFA does not incur a fixed delay cost, since it is used only when, and by exactly as much as, it is needed. In this paper, we take the example of an 802.11-based multi-hop network to show the impact of SCFA, since many emerging multi-hop networks are built on 802.11 technology. The results show that SCFA on an 802.11-based multi-hop network can boost the number of calls approximately twofold, or extend the hop distance threefold for a given number of calls to carry.
Soo Kim, Heejo Lee, Wanyeon Lee, "Improving Resiliency of Network Topology with Enhanced Evolving Strategies", IEEE Int'l Conf. Computer and Information Technology (IEEE CIT), Sep. 2006.
Recent studies have shown that many real networks follow a power-law distribution of node degrees. Unlike random connectivity, however, power-law connectivity suffers from vulnerability to targeted attacks, since its interconnection relies heavily on a very few nodes. In addition, the connectivity of power-law networks becomes more concentrated on this small group of nodes over time, which can be explained by Barabasi and Albert's rich-get-richer model. The rich-get-richer model is the most widely accepted generative model and follows the rule of preferential attachment to high-degree nodes. Thus, the preference of newly created nodes for connecting to high-degree nodes renders the network less resilient as it evolves. In this paper, we propose three different evolving strategies applicable to Internet topologies and measure the resiliency of evolving networks with two resiliency metrics. Our experiments show that choosing an appropriate evolving strategy is more effective in increasing the resiliency of a network topology than simply adding more links. We also show the possibility of improving the attack resiliency of the Internet topology by adapting only part of the network, e.g., 20-40%, to a new evolving strategy, such as changing from max-degree preference to average-degree preference, which can be considered a practical range of deployment.
Beomsoo Park, Sungjin Hong, Jaewook Oh, Heejo Lee, Yoojae Won, "One Touch Logon: Replacing Multiple Passwords with Single Fingerprint Recognition", IEEE Int'l Conf. Computer and Information Technology (IEEE CIT), Sep. 2006.
User authentication is still heavily reliant on the use of passwords, and the security problems associated with passwords are becoming more and more serious. The main causes of these problems are the prevalence of password sniffing and the difficulty of password management due to the increased number of accessible systems. In this paper, we propose a personal password management system called "One Touch Logon", which replaces annoying password-based authentication with a simple touch-and-login method. The effectiveness of the proposed system is demonstrated by implementing it on widely used legacy systems such as Microsoft Windows and Web site logons. The mechanism is easy to implement and integrate with current password-based systems through the use of an inexpensive consumer electronic device for fingerprint recognition. Moreover, eliminating the burden of memorizing multiple passwords enables users to choose hard-to-guess passwords and further increases the utilization of Internet services while improving their accessibility. Our empirical study shows that One Touch Logon gives more benefit as the number of employing sites increases, especially when that number exceeds three.
Taek Lee, Hoh Peter In, Eulgyu Im, Heejo Lee, "Cascade Damage Estimation Model for Internet Attacks", Workshop on Intelligence and Security Informatics (WISI), Vol. 3917, pp. 163-164, Apr. 2006.
Risk analysis and damage estimation are indispensable for gaining the data needed to make better security investment decisions. The most reasonable metrics for measuring the damage of a security incident are recovery cost and business opportunity cost. In the case of a worm incident, these costs capture only the direct damage caused by infected systems. However, collaterally cascading damage is also serious: it can impact other innocent systems that depend on the infected systems for processing their business or obtaining services.
Hyundo Park, Heejo Lee, "Detecting Unknown Worms using Randomness Check", Int'l Conf. on Information Networking (ICOIN), Vol. 3961, pp. 775-784, Jan. 2006.
From the appearance of the CodeRed and SQL Slammer worms, we have learned that early detection of worm epidemics is important to reduce the damage caused by their outbreaks. One prominent characteristic of Internet worms is that they choose their next targets randomly, using a random number generator. In this paper, we propose a new worm detection mechanism that checks the random distribution of destination addresses. Our mechanism generates a traffic matrix and checks its rank to detect the spreading of Internet worms. From the fact that a random binary matrix holds a high rank, ADUR (Anomaly Detection Using Randomness check) is proposed for detecting unknown worms based on the rank of the traffic matrix. Experiments in various environments show that ADUR effectively detects the spread of new worms at an early stage, even when only one host is infected in the monitored network.
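The rank test above can be sketched over GF(2). How ADUR actually builds its traffic matrix is not reproduced here; packing low-order destination-address bits into rows is an illustrative assumption, but the core observation holds: random scanning fills the matrix with independent rows (high rank), while normal traffic revisits a few destinations (low rank).

```python
def gf2_rank(rows):
    """Rank of a binary matrix over GF(2); each row is packed into an int."""
    rank = 0
    rows = list(rows)
    width = max(rows).bit_length() if rows else 0
    for bit in reversed(range(width)):
        # Find a pivot row with this bit set, below already-fixed rows.
        pivot = next((i for i in range(rank, len(rows))
                      if rows[i] >> bit & 1), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i] >> bit & 1:
                rows[i] ^= rows[rank]  # eliminate the bit via XOR
        rank += 1
    return rank

def adur_alarm(dst_addrs, n=16, threshold=0.9):
    """Sketch (matrix construction is an assumption): pack the n
    low-order bits of the first n observed destination addresses into
    an n-row binary matrix and alarm when its rank is near full."""
    rows = [addr & ((1 << n) - 1) for addr in dst_addrs[:n]]
    return gf2_rank(rows) >= threshold * n
```

GF(2) elimination costs only XORs and shifts, so the check is cheap enough to run continuously on live traffic.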
Hyunsang Choi, Heejo Lee, "PCAV: Internet Attack Visualization on Parallel Coordinates", Int'l Conf. on Information and Communications Security (ICICS), Vol. 3783, pp. 454-466, Dec. 2005.
This paper presents PCAV (Parallel Coordinates Attack Visualizer), a real-time visualization system for detecting large-scale Internet attacks, including Internet worms, DDoS attacks, and network scanning activities. PCAV displays network traffic on the plane of parallel coordinates using the source IP address, destination IP address, destination port, and average packet length of a flow. These four values are used to draw each flow as a connected line on the plane, and, strikingly, a group of lines forms a particular shape in the case of an attack. Thus, a simple but novel way of displaying traffic reveals ongoing attacks. From the fact that numerous types of attacks form specific graphical patterns, we have developed nine signatures and a detection mechanism for them using an efficient hashing algorithm. Using the graphical signatures, PCAV can quickly detect new attacks and enables network administrators to instantly recognize and respond to them. Another strength of PCAV comes from handling flows instead of packets. Per-flow visualization greatly reduces processing time and provides compatibility with legacy routers which export flow information, such as NetFlow in Cisco routers. We have demonstrated the effectiveness of PCAV using real attack traffic.
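Two of the parallel-coordinates shapes can be sketched without any graphics. These signature choices and the fan-out threshold are illustrative assumptions, not PCAV's nine actual signatures: many flows sharing one source and one port while fanning out across destinations draw the wedge of a host scan, and many sources converging on one (destination, port) pair draw the mirrored wedge of a DDoS attack.

```python
from collections import defaultdict

def pcav_signatures(flows, fanout=100):
    """Sketch: a flow is the 4-tuple (src IP, dst IP, dst port, mean
    packet length) that PCAV draws as one line. Detect two wedge
    shapes by counting fan-out on hashed axis pairs."""
    by_src_port = defaultdict(set)  # (src, dport) -> destinations
    by_dst_port = defaultdict(set)  # (dst, dport) -> sources
    for src, dst, dport, _plen in flows:
        by_src_port[(src, dport)].add(dst)
        by_dst_port[(dst, dport)].add(src)
    alerts = []
    for (src, dport), dsts in by_src_port.items():
        if len(dsts) >= fanout:
            alerts.append(("scan", src, dport))
    for (dst, dport), srcs in by_dst_port.items():
        if len(srcs) >= fanout:
            alerts.append(("ddos", dst, dport))
    return alerts
```

Counting per flow rather than per packet is what keeps this compatible with NetFlow-style exports: the detector never touches payloads.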
Beomsoo Park, Sungjin Hong, Jaewook Oh, Heejo Lee, "Defending a Web Browser against Spying with Browser Help Object", IEEE Int'l Conf. on Intelligence and Security Informatics (IEEE ISI), Vol. 3495, pp. 638-639, May. 2005.
The proposed SAS architecture protects sign-in information from known and unknown malicious BHOs (Browser Helper Objects). We have implemented the SAS architecture on Windows systems. The reference implementation works properly for 83% of the 100 most popular web sites. Furthermore, we can increase the effective range by adapting the detection algorithm to the remaining exceptional sites.
Ohoon Kwon, Seungmin Lee, Heejo Lee, Jong Kim, Sangcheon Kim, Gunwoo Nam, Joonggil Park, "HackSim: An Automation of Penetration Testing for Remote Buffer Overflow Vulnerabilities", Int'l Conf. on Information Networking (ICOIN), Vol. 3391, pp. 652-661, Jan. 2005.
We propose an extensible exploit framework for automating penetration testing (or pen-testing) without loss of safety, and describe possible methods for sanitizing unreliable code in each part of the framework. The proposed framework plays a key role in implementing HackSim, a pen-testing tool that remotely exploits known buffer-overflow vulnerabilities. Implementing our enhanced version of HackSim for Solaris and Windows systems, we show the advantages of our sanitized pen-testing tool in terms of safety compared with existing pen-testing tools and exploit frameworks. This work is a step toward a systematic approach for substituting the difficult parts of the labor-intensive pen-testing process.
Heejo Lee, Jong Kim, "Attack Resiliency of Network Topologies", Int'l Conf. on Parallel and Distributed Computing, Applications and Technologies (PDCAT), Vol. 3320, pp. 638-641, Dec. 2004.
Network topology has no direct effect on the correctness of network protocols; however, it influences the performance of networks and their survivability under attacks. In this paper, we examine the attack resiliency of network topologies and show that the topological structure has a direct impact on the robustness of a network under attack.
Jin-Ho Kim, Saewoong Bahk, Heejo Lee, "A Connection Management Protocol for Stateful Inspection Firewalls in Multi-Homed Networks", IEEE Int'l Conf. on Communications (IEEE ICC), Jun. 2004.
To provide network services consistently under various network failures, enterprise networks increasingly utilize path diversity through multi-homing. As a result, multi-homed non-transit autonomous systems (ASes) have come to outnumber single-homed networks. In this paper, we address an inevitable problem that occurs when networks with multiple entry points deploy stateful inspection firewalls at their borders, and we formulate this phenomenon as a state-sharing problem among multiple firewalls under asymmetric routing conditions. To solve this problem, we propose a stateful inspection protocol that requires very low processing and messaging overhead. Our protocol consists of the following two phases: 1) generation of a TCP SYN cookie marked with the firewall identification number upon a SYN packet arrival, and 2) state sharing triggered by a SYN/ACK packet arrival in the absence of the trail of its initial SYN packet. We demonstrate that our protocol is scalable, robust, and simple enough to be deployed for high-speed networks. It also works transparently under any client-server configuration. Last but not least, we present experimental results from a prototype implementation.
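The first phase, a SYN cookie carrying the firewall's identifier, can be sketched as below. The hash construction, bit layout, and shared key are assumptions for illustration, not the paper's exact scheme.

```python
# Sketch: a SYN-cookie-style initial sequence number whose low bits mark
# which firewall created the connection state, so a peer firewall that sees
# the SYN/ACK on the asymmetric return path knows where to find the state.

import hashlib
import hmac

SECRET = b"shared-firewall-secret"  # assumed key shared by the firewalls
FW_ID_BITS = 3                      # assumed: supports up to 8 firewalls

def make_cookie(src, dst, sport, dport, fw_id):
    """Derive a 32-bit cookie from the 4-tuple, marked with fw_id."""
    msg = f"{src}:{sport}-{dst}:{dport}".encode()
    digest = hmac.new(SECRET, msg, hashlib.sha256).digest()
    h = int.from_bytes(digest[:4], "big")
    # Keep the keyed hash in the high bits, the firewall ID in the low bits.
    return (h & ~((1 << FW_ID_BITS) - 1)) | fw_id

def extract_fw_id(cookie):
    """Recover which firewall issued the cookie from the SYN/ACK's ack no."""
    return cookie & ((1 << FW_ID_BITS) - 1)

cookie = make_cookie("203.0.113.5", "198.51.100.9", 40000, 80, fw_id=5)
assert extract_fw_id(cookie) == 5
```

Because the identifier rides inside the sequence number, no extra message is needed until a SYN/ACK actually arrives at a firewall lacking the state.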
Kihong Park, Heejo Lee, "On the Effectiveness of Route-Based Packet Filtering for Distributed DoS Attack Prevention in Power-Law Internets", Int'l Conf. on ACM SIGCOMM, Vol. 31, No. 4, pp. 15-26, Aug. 2001.
Denial of service (DoS) attack on the Internet has become a pressing problem. In this paper, we describe and evaluate route-based distributed packet filtering (DPF), a novel approach to distributed DoS (DDoS) attack prevention. We show that DPF achieves proactiveness and scalability, and we show that there is an intimate relationship between the effectiveness of DPF at mitigating DDoS attack and power-law network topology. The salient features of this work are two-fold. First, we show that DPF is able to proactively filter out a significant fraction of spoofed packet flows and prevent attack packets from reaching their targets in the first place. The IP flows that cannot be proactively curtailed are extremely sparse, so that their origin can be localized (i.e., IP traceback) to within a small, constant number of candidate sites. We show that the two proactive and reactive performance effects can be achieved by implementing route-based filtering on less than 20% of Internet autonomous system (AS) sites. Second, we show that the two complementary performance measures are dependent on the properties of the underlying AS graph. In particular, we show that the power-law structure of Internet AS topology leads to connectivity properties which are crucial in facilitating the observed performance effects.
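The filtering rule itself is simple to state in code. The toy topology and route table below are invented for illustration; real DPF derives reachability from BGP-style routing on the AS graph.

```python
# Toy sketch of route-based filtering: a filtering AS forwards a packet only
# if its claimed source AS could legitimately reach the destination through
# this AS. Spoofed sources that contradict routing are dropped in transit.

# routes[(src_as, dst_as)] = list of ASes on the path (made-up topology).
routes = {
    (1, 4): [1, 2, 4],
    (3, 4): [3, 2, 4],
    (1, 3): [1, 2, 3],
}

def permits(filter_as, claimed_src, dst):
    """True iff filter_as lies on a valid route claimed_src -> dst."""
    path = routes.get((claimed_src, dst))
    return path is not None and filter_as in path

# AS 2 forwards genuine traffic from AS 1 toward AS 4 ...
assert permits(2, 1, 4)
# ... but drops a packet toward AS 1 spoofing AS 3 as its source,
# since no route (3, 1) passes through AS 2 in this topology.
assert not permits(2, 3, 1)
```

The paper's result is that placing such filters on under 20% of AS sites, chosen using the power-law connectivity of the AS graph, already curtails most spoofed flows.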
Kihong Park, Heejo Lee, "On the Effectiveness of Probabilistic Packet Marking for IP Traceback under Denial of Service Attack", IEEE INFOCOM, Vol. 1, pp. 338-347, Apr. 2001.
Effective mitigation of denial of service (DoS) attack is a pressing problem on the Internet. In many instances, DoS attacks can be prevented if the spoofed source IP address is traced back to its origin, which allows assigning penalties to the offending party or isolating the compromised hosts and domains from the rest of the network. Recently, IP traceback mechanisms based on probabilistic packet marking (PPM) have been proposed for achieving traceback of DoS attacks. In this paper, we show that probabilistic packet marking, of interest due to its efficiency and implementability vis-à-vis deterministic packet marking and logging or messaging based schemes, suffers under spoofing of the marking field in the IP header by the attacker, which can impede traceback by the victim. We show that there is a trade-off between the ability of the victim to localize the attacker and the severity of the DoS attack, which is represented as a function of the marking probability, path length, and traffic volume. The optimal decision problem, in which the victim can choose the marking probability whereas the attacker can choose the spoofed marking value, source address, and attack volume, can be expressed as a constrained minimax optimization problem, where the victim chooses the marking probability such that the number of forgeable attack paths is minimized. We show that the attacker's ability to hide his location is curtailed by increasing the marking probability; however, the latter is upper-bounded due to sampling constraints. In typical IP internets, the attacker's address can be localized to within 2-5 equally likely sites, which renders PPM effective against single-source attacks. Under distributed DoS attacks, the uncertainty achievable by the attacker can be amplified, which diminishes the effectiveness of PPM.
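The core tension can be simulated in a few lines. This is a minimal sketch of the marking process only, with illustrative router names; it abstracts away the IP-header encoding the actual PPM schemes use.

```python
# Sketch: each router on the attack path overwrites the mark field with
# probability p. If no router marks, the attacker's pre-loaded spoofed mark
# survives to the victim -- the forgeable fraction is (1 - p)^d for path
# length d, which is what the paper's minimax analysis trades off against p.

import random

def deliver(path, p, spoofed_mark, rng):
    """Return the mark the victim sees for one attack packet."""
    mark = spoofed_mark               # attacker chooses the initial mark
    for router in path:
        if rng.random() < p:
            mark = router             # router overwrites with its own ID
    return mark

rng = random.Random(0)
path = ["R1", "R2", "R3", "R4", "R5"]  # attacker -> victim, d = 5
p = 0.25
marks = [deliver(path, p, "FAKE", rng) for _ in range(10000)]
forged = marks.count("FAKE") / len(marks)  # close to (1 - 0.25)**5 = 0.237
```

Raising p shrinks the forged fraction but, as the abstract notes, p is upper-bounded by sampling constraints on reconstructing distant routers' marks.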
Heejo Lee, Jong Kim, Sungje Hong, Sunggu Lee, "Task Scheduling using a Block Dependency DAG for Block-Oriented Sparse Cholesky Factorization", ACM SIGAPP Symposium on Applied Computing (ACM SAC), Vol. 2, pp. 641-648, Mar. 19. 2000.
Block-oriented sparse Cholesky factorization decomposes a sparse matrix into rectangular sub-blocks; each block can then be handled as a computational unit in order to increase data reuse in a hierarchical memory system. The factorization method also increases the degree of concurrency while reducing communication volume, so that it performs more efficiently on a distributed-memory multiprocessor system than the customary column-oriented factorization method. Until now, however, the mapping of blocks to processors has been designed for load balance with restricted communication patterns. In this paper, we represent tasks using a block dependency DAG that captures the execution behavior of block sparse Cholesky factorization. Since the characteristics of these tasks differ from those of the conventional parallel task model, we propose a new task scheduling algorithm based on the block dependency DAG. The proposed algorithm consists of two stages: early-start clustering and affined cluster mapping. The early-start clustering stage clusters tasks while preserving the earliest start time of each task without limiting parallelism. After task clustering, the affined cluster mapping stage allocates clusters to processors considering both communication cost and load balance. Experimental results on a Fujitsu parallel system show that the proposed approach outperforms other processor mapping methods.
Heejo Lee, Jong Kim, Sungje Hong, "Evaluation of Matrix Chain Products on Parallel Systems", IASTED Int'l Conf. on Parallel and Distributed Computing and Systems (PDCS), pp. 124-129, Oct. 1997.
The problem of finding an optimal product sequence for sequential multiplication of matrices (the matrix chain ordering problem, MCOP) is well-known and has been studied for a long time. In this paper, we consider the problem of finding an optimal product schedule for evaluating a chain of matrix products on a parallel computer (the matrix chain scheduling problem, MCSP). The difference between MCSP and MCOP is that MCOP considers a product sequence for single-processor systems, whereas MCSP considers a sequence of concurrent matrix products for parallel systems. The approach of parallelizing each matrix product after finding an optimal product sequence for single-processor systems does not always guarantee a minimal evaluation time, since each parallelized matrix product may use processors inefficiently. We introduce a processor scheduling algorithm for MCSP which attempts to minimize the evaluation time of a chain of matrix products on a parallel computer, even at the expense of a slight increase in the total number of operations. Experiments on the Fujitsu AP multicomputer show that the proposed algorithm decreases the time required to evaluate a chain of matrix products in actual parallel systems.
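For reference, the MCOP baseline that MCSP is contrasted against is the classic O(n^3) dynamic program below; this is the textbook algorithm, not the paper's MCSP scheduler.

```python
# Classic dynamic programming for the matrix chain ordering problem (MCOP):
# dims[i], dims[i+1] give the shape of matrix i, so a chain of n matrices
# is described by n + 1 dimension entries.

def matrix_chain_order(dims):
    """Minimum scalar multiplications to evaluate the chain sequentially."""
    n = len(dims) - 1
    # cost[i][j]: cheapest way to compute the product A_i ... A_j.
    cost = [[0] * n for _ in range(n)]
    for length in range(2, n + 1):          # chain length being solved
        for i in range(n - length + 1):
            j = i + length - 1
            cost[i][j] = min(
                cost[i][k] + cost[k + 1][j]
                + dims[i] * dims[k + 1] * dims[j + 1]
                for k in range(i, j)        # split point between i and j
            )
    return cost[0][n - 1]

# 10x30 * 30x5 * 5x60: the order ((A B) C) costs 1500 + 3000 = 4500,
# versus 27000 for (A (B C)).
assert matrix_chain_order([10, 30, 5, 60]) == 4500
```

MCSP differs in that it may deliberately pick a sequence with more total operations when running several products concurrently finishes sooner.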
Heejo Lee, John Milburn, Jong Kim, "The Witch Hunt", SANS Network Security, Nov. 1996.
This report details the story of tracking a cracker who broke into a number of computer systems at the Pohang University of Science and Technology (POSTECH) and at Ewha Womans University. The cracker is someone who had been given a position of trust and responsibility within the Korean Internet community. The attack methods include exploitation of a ypupdated bug and sophisticated IP spoofing, similar to what Mitnick used in the Shimomura attack. This is the first Internet cracking incident in Korea to result in imprisonment.
Heejo Lee, Jong Kim, Sungje Hong, "Processor Allocation for Matrix Products", Parallel Computing Workshop (PCW), pp. S-E-1--S-E-7, Nov. 1996.
In this paper, we present the problem of allocating processors for matrix products. First, we consider how many processors should be allocated for computing one matrix product on a parallel system. Then, we discuss how to allocate processors for a number of independent matrix products on the parallel system. In many cases, it is shown that the performance of parallel algorithms does not improve beyond a certain point even though more processors are allocated for more parallelism. The results from experiments on the Fujitsu AP parallel system for a matrix product show that allocating more processors is not always beneficial for overall system performance. Also, when evaluating a number of independent matrix products, the concurrent execution of multiple matrix products by partitioning the system is better than evaluating the matrix products sequentially and parallelizing each one. Finally, we conclude that this kind of result is applicable to many processor allocation problems on a parallel system, such as the processor allocation problem for evaluating a chain of matrix products.
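The saturation effect can be illustrated with a toy cost model. The cost function T(p) = work/p + c*p, standing in for compute time plus communication overhead, is an assumption for illustration and not the paper's measured model.

```python
# Toy model of the abstract's point: past some processor count, per-product
# overhead outweighs the added parallelism, so partitioning the machine
# among independent products beats parallelizing each product in turn.

def t(work, p, c=1.0):
    """Assumed cost model: compute time work/p plus overhead c*p."""
    return work / p + c * p

P, work = 64, 1024
# Two independent products run one after another, each on all P processors:
sequential = 2 * t(work, P)          # 2 * (16 + 64) = 160
# The same two products run side by side, each on half the machine:
partitioned = t(work, P // 2)        # 32 + 32 = 64
assert partitioned < sequential
```

Under any cost model where overhead grows with processor count, the same partitioning argument carries over to scheduling a chain of matrix products, which is the MCSP direction the authors pursue.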