
Author Archives: kaiser

Outreach Service

Are you an unpaid volunteer for a non-profit organization in the New York City area? Have you completed COMS W4156 Advanced Software Engineering (or equivalent) with a grade of B+ or higher? Does (or could) your work at the non-profit leverage your software engineering skills to develop and maintain software important to the non-profit’s mission (not just its website)? Would you like to receive academic credit for your work (in 3998, 4901, or 6901)?

Contact Professor Gail Kaiser to discuss opportunities: kaiser@cs.columbia.edu.

Replay without Recording of Production Bugs for Service Oriented Applications

Presented by Jonathan Bell at the 33rd ACM/IEEE International Conference on Automated Software Engineering (ASE) in September 2018. https://doi.org/10.1145/3238147.3238186.

Software available at https://github.com/Programming-Systems-Lab/Parikshan. The software is not maintained.

Short time-to-localize and time-to-fix for production bugs is extremely important for any 24×7 service-oriented application (SOA). Debugging buggy behavior in deployed applications is hard, as it requires careful reproduction of a similar environment and workload. Prior approaches for automatically reproducing production failures do not scale to large SOA systems. Our key insight is that for many failures in SOA systems (e.g., many semantic and performance bugs), a failure can automatically be reproduced solely by relaying network packets to replicas of suspect services, an insight that we validated through a manual study of 16 real bugs across five different systems. This paper presents Parikshan, an application monitoring framework that leverages user-space virtualization and network proxy technologies to provide a sandbox “debug” environment. In this “debug” environment, developers are free to attach debuggers and analysis tools without impacting performance or correctness of the production environment. In comparison to existing monitoring solutions that can slow down production applications, Parikshan allows application monitoring at significantly lower overhead.
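
The mechanism is easiest to picture as a traffic-duplicating proxy. The sketch below is not the Parikshan implementation, just a minimal illustration of the idea in Python with hypothetical addresses: client traffic is relayed to the production service as usual and mirrored, best effort, to a sandboxed debug replica, and only the production response is returned to the client.

# Minimal sketch of the traffic-duplication idea (not the Parikshan
# implementation). The addresses below are hypothetical placeholders.
import socket
import threading

LISTEN_ADDR = ("0.0.0.0", 8000)    # clients connect here
PROD_ADDR   = ("127.0.0.1", 8080)  # production service (authoritative)
DEBUG_ADDR  = ("127.0.0.1", 9090)  # debug replica (best effort only)

def handle(client):
    prod = socket.create_connection(PROD_ADDR)
    try:
        debug = socket.create_connection(DEBUG_ADDR, timeout=0.1)
    except OSError:
        debug = None  # a slow or missing replica must never affect production
    try:
        while True:
            data = client.recv(4096)
            if not data:
                break
            prod.sendall(data)                 # production path: synchronous
            if debug is not None:
                try:
                    debug.sendall(data)        # debug path: fire and forget
                except OSError:
                    debug = None
            resp = prod.recv(4096)
            if not resp:
                break
            client.sendall(resp)               # only the production response returns
    finally:
        for s in (client, prod, debug):
            if s is not None:
                s.close()

def main():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(LISTEN_ADDR)
    srv.listen()
    while True:
        conn, _ = srv.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()

if __name__ == "__main__":
    main()

In Parikshan itself the replica is a clone of the suspect service running under user-space virtualization, so the mirrored traffic exercises the same code while the production instance never waits on the debug path.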

Binary Quilting to Generate Patched Executables without Compilation

Anthony Saieva and Gail Kaiser. Binary Quilting to Generate Patched Executables without Compilation. ACM Workshop on Forming an Ecosystem Around Software Transformation (FEAST), Virtual, November 2020, pp. 3-8. https://doi.org/10.1145/3411502.3418424

When applying patches or dealing with legacy software, users are often reluctant to change the production executables for fear of unwanted side effects. This results in many active systems running vulnerable or buggy code even though the problems have already been identified and resolved by developers. Furthermore, when dealing with old or proprietary software, users cannot view or compile the source code, so any attempt to change the application after distribution requires binary-level manipulation. We present a new technique, which we call binary quilting, that allows users to apply the designated minimum patch that preserves core semantics, without fear of unwanted side effects introduced either by the build process or by additional code changes. Unlike hot patching, binary quilting is a one-time procedure that creates an entirely new reusable binary. Our case studies show the efficacy of this technique on real software in real patching scenarios.
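
To make the end result concrete, the sketch below (illustrative only, not the binary quilting algorithm from the paper) applies a pre-extracted set of byte-level edits to a copy of a shipped executable, producing a new patched binary without any compilation; the file names, offsets, and bytes are hypothetical.

# Illustrative sketch only, NOT the binary quilting algorithm from the paper:
# apply pre-extracted byte-level edits to a copy of a shipped executable,
# producing a patched binary without recompilation.
# Each patch entry: (file offset, bytes expected at that offset, replacement bytes).
PATCHES = [
    (0x1A40, bytes.fromhex("7417"), bytes.fromhex("eb17")),  # hypothetical edit
]

def quilt(original_path, patched_path):
    with open(original_path, "rb") as f:
        image = bytearray(f.read())
    for offset, expected, replacement in PATCHES:
        if image[offset:offset + len(expected)] != expected:
            raise ValueError(f"unexpected bytes at {offset:#x}; refusing to patch")
        if len(replacement) != len(expected):
            raise ValueError("this sketch only supports same-size replacements")
        image[offset:offset + len(replacement)] = replacement
    with open(patched_path, "wb") as f:
        f.write(image)  # a new, reusable executable; the original is untouched

if __name__ == "__main__":
    quilt("app-v1.0.bin", "app-v1.0-patched.bin")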

@inproceedings{10.1145/3411502.3418424,
author = {Saieva, Anthony and Kaiser, Gail},
title = {{Binary Quilting to Generate Patched Executables without Compilation}},
year = {2020},
month = {November},
url = {https://doi.org/10.1145/3411502.3418424},
doi = {10.1145/3411502.3418424},
booktitle = {ACM Workshop on Forming an Ecosystem Around Software Transformation (FEAST)},
pages = {3--8},
location = {Virtual},
}

Binary Quilting to Generate Patched Executables without Compilation

Presented by Anthony Saieva at the ACM Workshop on Forming an Ecosystem Around Software Transformation (FEAST) on November 13, 2020. https://doi.org/10.1145/3411502.3418424

When applying patches or dealing with legacy software, users are often reluctant to change the production executables for fear of unwanted side effects. This results in many active systems running vulnerable or buggy code even though the problems have already been identified and resolved by developers. Furthermore, when dealing with old or proprietary software, users cannot view or compile the source code, so any attempt to change the application after distribution requires binary-level manipulation. We present a new technique, which we call binary quilting, that allows users to apply the designated minimum patch that preserves core semantics, without fear of unwanted side effects introduced either by the build process or by additional code changes. Unlike hot patching, binary quilting is a one-time procedure that creates an entirely new reusable binary. Our case studies show the efficacy of this technique on real software in real patching scenarios.

Ad hoc Test Generation Through Binary Rewriting

Presented by Anthony Saieva at the 20th IEEE International Working Conference on Source Code Analysis and Manipulation (SCAM) on September 27, 2020. https://doi.org/10.1109/SCAM51674.2020.00018

Software available at https://github.com/Programming-Systems-Lab/ATTUNE.

When a security vulnerability or other critical bug is not detected by the developers’ test suite, and is discovered post-deployment, developers must quickly devise a new test that reproduces the buggy behavior. Then the developers need to test whether their candidate patch indeed fixes the bug, without breaking other functionality, while racing to deploy before attackers pounce on exposed user installations. This can be challenging when factors in a specific user environment triggered the bug. If enabled, however, record-replay technology faithfully replays the execution in the developer environment as if the program were executing in that user environment under the same conditions as the bug manifested. This includes intermediate program states dependent on system calls, memory layout, etc. as well as any externally-visible behavior. Many modern record-replay tools integrate interactive debuggers, to help locate the root cause, but don’t help the developers test whether their patch indeed eliminates the bug under those same conditions. In particular, modern record-replay tools that reproduce intermediate program state cannot replay recordings made with one version of a program using a different version of the program where the differences affect program state. This work builds on record-replay and binary rewriting to automatically generate and run targeted tests for candidate patches significantly faster and more efficiently than traditional test suite generation techniques like symbolic execution. These tests reflect the arbitrary (ad hoc) user and system circumstances that uncovered the bug, enabling developers to check whether a patch indeed fixes that bug. The tests essentially replay recordings made with one version of a program using a different version of the program, even when the differences impact program state, by manipulating both the binary executable and the recorded log to result in an execution consistent with what would have happened had the patched version executed in the user environment under the same conditions where the bug manifested with the original version. Our approach also enables users to make new recordings of their own workloads with the original version of the program, and automatically generate and run the corresponding ad hoc tests on the patched version, to validate that the patch does not break functionality they rely on.
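
Stripped of the binary rewriting and system-call-level replay that make this work on real executables, the testing workflow reduces to replaying a recording made with the original version against the patched version and interpreting agreement or divergence step by step. A minimal sketch under those assumptions follows; the recording format and function names are hypothetical.

# Minimal sketch of the replay-as-test idea (the recording format and function
# names are hypothetical; the actual tool works on binaries and recorded logs,
# not on Python functions).
import json

def replay(recording_path, patched_fn, bug_steps=()):
    """Replay a recording made with the original version against the patched one.

    Steps listed in bug_steps exercised the bug, so their recorded results are
    wrong and the patched output is expected to differ there; everywhere else
    the patched version should reproduce the recorded behavior exactly.
    """
    with open(recording_path) as f:
        events = json.load(f)  # e.g. [{"args": [...], "result": ...}, ...]
    verdicts = []
    for i, event in enumerate(events):
        actual = patched_fn(*event["args"])
        if i in bug_steps:
            verdicts.append(("bug fixed", i, actual != event["result"]))
        else:
            verdicts.append(("no regression", i, actual == event["result"]))
    return verdicts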

Ad hoc Test Generation Through Binary Rewriting

Anthony Saieva, Shirish Singh and Gail Kaiser. Ad hoc Test Generation Through Binary Rewriting. 20th IEEE International Working Conference on Source Code Analysis and Manipulation (SCAM), Virtual, September 2020, pp. 115-126. https://doi.org/10.1109/SCAM51674.2020.00018.

Software available at https://github.com/Programming-Systems-Lab/ATTUNE.

When a security vulnerability or other critical bug is not detected by the developers’ test suite, and is discovered post-deployment, developers must quickly devise a new test that reproduces the buggy behavior. Then the developers need to test whether their candidate patch indeed fixes the bug, without breaking other functionality, while racing to deploy before attackers pounce on exposed user installations. This can be challenging when factors in a specific user environment triggered the bug. If enabled, however, record-replay technology faithfully replays the execution in the developer environment as if the program were executing in that user environment under the same conditions as the bug manifested. This includes intermediate program states dependent on system calls, memory layout, etc. as well as any externally-visible behavior. Many modern record-replay tools integrate interactive debuggers, to help locate the root cause, but don’t help the developers test whether their patch indeed eliminates the bug under those same conditions. In particular, modern record-replay tools that reproduce intermediate program state cannot replay recordings made with one version of a program using a different version of the program where the differences affect program state. This work builds on record-replay and binary rewriting to automatically generate and run targeted tests for candidate patches significantly faster and more efficiently than traditional test suite generation techniques like symbolic execution. These tests reflect the arbitrary (ad hoc) user and system circumstances that uncovered the bug, enabling developers to check whether a patch indeed fixes that bug. The tests essentially replay recordings made with one version of a program using a different version of the program, even when the differences impact program state, by manipulating both the binary executable and the recorded log to result in an execution consistent with what would have happened had the patched version executed in the user environment under the same conditions where the bug manifested with the original version. Our approach also enables users to make new recordings of their own workloads with the original version of the program, and automatically generate and run the corresponding ad hoc tests on the patched version, to validate that the patch does not break functionality they rely on.

@INPROCEEDINGS{9252025,
author={Anthony {Saieva} and Shirish {Singh} and Gail {Kaiser}},
booktitle={IEEE 20th International Working Conference on Source Code Analysis and Manipulation (SCAM)},
title={{Ad hoc Test Generation Through Binary Rewriting}},
month = {September},
year={2020},
location = {Virtual},
volume={},
number={},
pages={115--126},
url = {https://doi.org/10.1109/SCAM51674.2020.00018},
}

Replay without Recording of Production Bugs for Service Oriented Applications

Nipun Arora, Jonathan Bell, Franjo Ivančić, Gail Kaiser and Baishakhi Ray. Replay without Recording of Production Bugs for Service Oriented Applications. 33rd ACM/IEEE International Conference on Automated Software Engineering (ASE), Montpellier, France, September 2018, pp. 452-463. https://doi.org/10.1145/3238147.3238186.

Software available at https://github.com/Programming-Systems-Lab/Parikshan. The software is not maintained.

Short time-to-localize and time-to-fix for production bugs is extremely important for any 24×7 service-oriented application (SOA). Debugging buggy behavior in deployed applications is hard, as it requires careful reproduction of a similar environment and workload. Prior approaches for automatically reproducing production failures do not scale to large SOA systems. Our key insight is that for many failures in SOA systems (e.g., many semantic and performance bugs), a failure can automatically be reproduced solely by relaying network packets to replicas of suspect services, an insight that we validated through a manual study of 16 real bugs across five different systems. This paper presents Parikshan, an application monitoring framework that leverages user-space virtualization and network proxy technologies to provide a sandbox “debug” environment. In this “debug” environment, developers are free to attach debuggers and analysis tools without impacting performance or correctness of the production environment. In comparison to existing monitoring solutions that can slow down production applications, Parikshan allows application monitoring at significantly lower overhead.

@inproceedings{Arora:2018:RWR:3238147.3238186,
 author = {Arora, Nipun and Bell, Jonathan and Ivan\v{c}i\'{c}, Franjo and Kaiser, Gail and Ray, Baishakhi},
 title = {Replay Without Recording of Production Bugs for Service Oriented Applications},
 booktitle = {Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering},
 series = {ASE 2018},
 year = {2018},
 isbn = {978-1-4503-5937-5},
 location = {Montpellier, France},
 pages = {452--463},
 numpages = {12},
 url = {http://doi.acm.org/10.1145/3238147.3238186},
 doi = {10.1145/3238147.3238186},
 acmid = {3238186},
 publisher = {ACM},
 address = {New York, NY, USA},
 keywords = {Fault reproduction, live debugging},
} 

SEmantic Security bug detection with Pseudo-Oracles (SESPO)

Semantic security bugs cause serious vulnerabilities across a wide range of software, e.g., bypassing security checks, gaining access to sensitive information, and escalating privileges. Automatically detecting these bugs is hard because, unlike crashing bugs, they may not show any obvious side effects. The safety and security specifications violated by semantic security bugs are rarely written in a form that is easy to express as test oracles defining precisely what the behavior should be for every possible input sequence.

Pseudo-oracle testing is one popular technique to identify non-crashing buggy behaviors when true test oracles are unavailable:  Two or more executions are compared, as oracles for each other, to find discrepancies.  The most common pseudo-oracle approach used in security research is differential testing, where multiple independent implementations of the same functionality are compared to see if they agree on, e.g., whether a given untrusted input is valid.   But that only works when multiple independent implementations are available to the developers.
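
As a minimal illustration of differential testing, the sketch below compares two independent validators for the same input language (IPv4 address strings standing in for any untrusted input) on random inputs and reports the cases where they disagree; the validators and the fuzzing loop are ours, for illustration only.

# Minimal differential-testing sketch: two independent validators for the same
# input language are compared on random inputs; any disagreement means at least
# one of them is wrong. Illustrative only.
import ipaddress
import random
import string

def validator_a(s):
    try:
        ipaddress.IPv4Address(s)
        return True
    except ValueError:
        return False

def validator_b(s):
    # Independent, hand-rolled implementation of the same check.
    parts = s.split(".")
    if len(parts) != 4:
        return False
    return all(p.isdigit() and 0 <= int(p) <= 255 for p in parts)

def fuzz(trials=10000):
    alphabet = string.digits + ". "
    disagreements = []
    for _ in range(trials):
        s = "".join(random.choices(alphabet, k=random.randint(1, 15)))
        if validator_a(s) != validator_b(s):
            disagreements.append(s)  # one validator accepts what the other rejects
    return disagreements

if __name__ == "__main__":
    print(fuzz()[:10])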

An alternative approach that requires only the one implementation at hand is metamorphic testing, which checks domain-specific metamorphic relations defined across sets of input/output pairs from multiple executions of the same test program. For example, a metamorphic property for a program p that adds two inputs a and b is p(a, b) = p(a, 0) + p(b, 0). For a program r that sorts an array of inputs, r( [original array] ) should produce the same sorted result as r( [permutation of original array] ). A machine learning classifier should also produce the same results when the order of the training set is shuffled.
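
Relations like these are mechanical to check once stated. A minimal sketch, with add() and sort() as stand-ins for the programs under test:

# Minimal metamorphic-testing sketch for the relations above; add() and sort()
# are placeholder programs under test.
import random

def add(a, b):
    return a + b        # stand-in for program p

def sort(xs):
    return sorted(xs)   # stand-in for program r

def check_relations(trials=1000):
    for _ in range(trials):
        a, b = random.randint(-1000, 1000), random.randint(-1000, 1000)
        # Relation 1: p(a, b) = p(a, 0) + p(b, 0)
        assert add(a, b) == add(a, 0) + add(b, 0)

        xs = [random.randint(-100, 100) for _ in range(random.randint(0, 20))]
        ys = xs[:]
        random.shuffle(ys)
        # Relation 2: sorting any permutation of the input gives the same output
        assert sort(xs) == sort(ys)
    print("no metamorphic-relation violations in", trials, "trials")

if __name__ == "__main__":
    check_relations()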

However, it is challenging to apply metamorphic relations to detecting security vulnerabilities, as security properties require richer semantics and often cannot be expressed as simple relationships among input-output pairs.  This project investigates whether there is a comprehensive range of semantically expressive metamorphic relations that can successfully detect semantic security vulnerabilities.

We are seeking undergraduate and MS project students to participate in a pilot study:

1. Create a vulnerability database of known semantic security bugs, their corresponding patches, and the test cases needed to reproduce those bugs.
2. Manually classify the bugs into different sub-categories, e.g., incorrect error handling, missing input validation, API abuse, etc.
3. Determine which properties change across the buggy and fixed versions that can be expressed as violations of metamorphic relations.
4. Inspect other execution paths to check whether these relations are satisfied during benign executions.
5. Find out whether these metamorphic relations are also applicable to similar bugs in other programs.

Students should be interested in security, software testing, and program analysis.

Contact: Professor Gail Kaiser, kaiser@cs.columbia.edu. Please put “sespo” in the subject of your email.

Preponderance of the Evidence for SimilAr BEhavioR (SABER)

This project investigates dynamic analysis approaches to identifying behavioral similarities among code elements in the same or different programs, particularly for code that behaves similarly during execution but does not look similar, and so would be difficult or impossible to detect using static (code clone) analysis. While code clone technology is fairly mature, detecting behavioral similarities is challenging.

Dynamic analysis involves executing the candidate programs, and portions of programs such as classes and methods, which means we need to identify program and method inputs that will be useful for identifying code that expert humans would deem similar, without numerous false positives. For instance, many methods given null or zero inputs produce null or zero outputs and/or execute similar instruction sequences, even though they behave entirely differently on other inputs. Some methods might coincidentally happen to behave similarly for one or a small number of more complex inputs, but still behave quite differently on the majority of the input space.

We plan to adapt the “preponderance of the evidence” metaphor to decide whether a given piece of code is sufficiently similar to the code at hand. This requires constructing test suites (collections of inputs) suitable for weighing similarities and differences, and devising machine learning or analytical approaches to classifying the corresponding collections of executions as similar or different. We will seek to combine metrics from both functional input/output vectors and instruction trace (e.g., data dependency graph) representations of executions.
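
As a toy illustration of that metaphor, the sketch below probes two methods that look nothing alike on a shared battery of non-trivial inputs and declares them similar only if they agree on most of it; the candidate methods, the probe battery, and the threshold are hypothetical, and the actual project would combine such input/output evidence with instruction-trace metrics.

# Toy sketch of weighing input/output evidence of behavioral similarity.
# The candidate methods, the probe inputs, and the 95% threshold are all
# hypothetical placeholders, not the SABER design.
import math
import random

def candidate_a(x):
    return abs(x)

def candidate_b(x):
    return math.sqrt(x * x)  # looks nothing like candidate_a, behaves the same

def similarity(f, g, probes):
    """Fraction of probe inputs on which f and g produce equal outputs."""
    return sum(1 for x in probes if f(x) == g(x)) / len(probes)

if __name__ == "__main__":
    # Deliberately avoid a trivial battery of zeros (see the caveat above).
    probes = [random.randint(-10000, 10000) for _ in range(1000)]
    score = similarity(candidate_a, candidate_b, probes)
    print(f"agreement on {score:.1%} of probes")
    print("similar" if score >= 0.95 else "different")  # preponderance threshold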

Our initial use case is program understanding: Most software is developed by teams of engineers, not a single individual working alone. A software product is rarely developed once and for all, with no bug fixes or features added after original deployment. Original team members leave the company or the project and new members join. Thus most engineers need to understand code originally written by other people; indeed, studies have found that engineers spend more of their time trying to understand unfamiliar code than creating or modifying code. The engineer might not know what a piece of code is supposed to do, how it works, how it is supposed to work, or how to fix it to make it work; documentation might be poor, outdated, or non-existent, and knowledgeable co-workers are not always available. We hypothesize that showing developers code that behaves similarly to the code at hand may be helpful, particularly if the similar code is better documented, better structured, or simply easier to understand.

We are seeking undergraduate and MS project students interested in program understanding, program analysis, compilers and language runtime environments, software testing, and machine learning. Possibly also security, since we may apply behavioral similarity to vulnerability detection and malware identification.

Contact: Professor Gail Kaiser, kaiser@cs.columbia.edu. Please put “saber” in the subject of your email.