The Java libraries JCA and JSSE offer cryptographic APIs to facilitate secure coding. When developers misuse these APIs, their code becomes vulnerable to cyber-attacks. To eliminate such vulnerabilities, researchers have built tools that detect security-API misuses via pattern matching. However, most tools neither (1) fix misuses nor (2) allow users to extend the tools' pattern sets. To overcome both limitations, we created Seader, an example-based approach to detect and repair security-API misuses. Given an exemplar ⟨insecure, secure⟩ code pair, Seader compares the two snippets to infer the API-misuse template and the corresponding fixing edit. Based on the inferred information, Seader performs inter-procedural static analysis on a given program to search for security-API misuses and to propose customized fixes.
For evaluation, we applied Seader to 28 ⟨insecure, secure⟩ code pairs; Seader successfully inferred 21 unique API-misuse templates and related fixes. With these ⟨vulnerability, fix⟩ patterns, we applied Seader to a benchmark of programs containing 86 known vulnerabilities. Seader detected vulnerabilities with 95% precision, 72% recall, and 82% F-score. We also applied Seader to 100 open-source projects and manually checked 77 suggested repairs; 76 of them were correct. Seader can help developers correctly use security APIs.
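The paper's own 28 exemplar pairs are not reproduced here, but a minimal sketch can show what such an ⟨insecure, secure⟩ pair looks like. The misuse chosen below, requesting "AES" without naming a mode so the provider defaults to ECB, is a well-known JCA pitfall used purely as an illustration; it is not necessarily one of Seader's inputs:

```java
import javax.crypto.Cipher;

// Hypothetical <insecure, secure> code pair of the kind an
// example-based tool such as Seader could learn a template from.
public class CipherPair {
    // Insecure: "AES" alone lets the provider default to ECB mode,
    // which leaks plaintext patterns across blocks.
    static String insecureTransformation() throws Exception {
        return Cipher.getInstance("AES").getAlgorithm();
    }

    // Secure counterpart: name an authenticated mode explicitly.
    static String secureTransformation() throws Exception {
        return Cipher.getInstance("AES/GCM/NoPadding").getAlgorithm();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(insecureTransformation());
        System.out.println(secureTransformation());
    }
}
```

Comparing the two snippets, the fixing edit is a one-token change to the transformation string passed to `Cipher.getInstance`, which is the kind of edit a template-plus-fix inference can capture.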
We discuss the needs and challenges of deployable security research by sharing our experience designing CryptoGuard, a high-precision tool for detecting cryptographic application programming interface misuses. Our project has produced multiple benchmarks as well as measurement results on state-of-the-art solutions.
Spring Security is widely used by practitioners to secure enterprise applications because of its ease of use. In this paper, we study application-framework misconfiguration vulnerabilities in the context of Spring Security, which is relatively understudied in the existing literature. Toward that goal, we identified six types of security anti-patterns and four insecure defaults through a measurement study of 28 Spring applications. Our analysis shows that the identified security anti-patterns and insecure defaults can leave enterprise applications vulnerable to a wide range of high-risk attacks. To prevent these attacks, we also provide recommendations for practitioners. So far, our study has contributed one update to the official Spring Security documentation; the other security issues identified in this study are being considered for future major releases by the Spring Security community.
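As one hedged illustration (not necessarily among the six anti-patterns identified in the study), a frequently cited Spring Security misconfiguration is disabling CSRF protection application-wide. The configuration sketch below uses Spring Security 5's Java DSL and requires the spring-security dependencies to compile; it is a fragment, not a runnable program:

```java
@EnableWebSecurity
public class SecurityConfig extends WebSecurityConfigurerAdapter {
    @Override
    protected void configure(HttpSecurity http) throws Exception {
        // Anti-pattern: turning off CSRF protection for the whole app.
        // http.csrf().disable();

        // Safer: keep CSRF enabled (the framework's secure default)
        // and require authentication for every request explicitly.
        http.authorizeRequests()
            .anyRequest().authenticated()
            .and()
            .httpBasic();
    }
}
```

Leaving the secure default in place and whitelisting only genuinely CSRF-exempt endpoints is the usual recommendation, rather than disabling the protection wholesale.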
Most existing Android malware detection and categorization techniques are static approaches, which suffer from evasion attacks such as obfuscation. By analyzing program behaviors, dynamic approaches are potentially more resilient against these attacks. Yet existing dynamic approaches mostly characterize system calls, which are subject to system-call obfuscation. This paper presents DroidCat, a novel dynamic app classification technique, to complement existing approaches. By using a diverse set of dynamic features based on method calls and inter-component communication (ICC) Intents, DroidCat achieves better robustness than static approaches as well as dynamic approaches that rely on system calls.
The features were distilled from a behavioral characterization study of benign and malicious apps. Through three comprehensive empirical studies with 34,343 apps, we demonstrated that DroidCat stably achieved high classification performance and outperformed two state-of-the-art peer techniques. Overall, DroidCat achieved a 97% F1 measure for classifying apps. When detecting and categorizing malware, DroidCat obtained 16%--27% higher accuracy than the two baseline techniques. We also investigated the effects of different design choices on DroidCat's effectiveness, and found that features representing the distributions of method calls to user-defined APIs and library APIs are more important than other features.
We were curious whether insecure coding suggestions are prevalent on SO and, if so, whether developers can rely on the community's dynamics to choose secure suggestions over insecure ones. We therefore conducted a second empirical study. We crawled SO answer posts containing code suggestions, and then leveraged Java Baker to extract any security-related implementations. We further applied clone detection to the extracted code for sampling. Next, we manually inspected the sampled data to decide whether each snippet was implemented securely or insecurely, basing our decisions on the security-API misuse patterns revealed by other researchers. We observed the following alarming phenomena:
We conducted an empirical study of Stack Overflow posts to understand developers' concerns about secure Java coding, their programming obstacles, and insecure coding practices. We crawled security-related discussion threads using the keywords "Java" and "security", and manually inspected 503 discussion threads. Our study revealed the following interesting findings: