
Author Archives: kaiser

Identifying Functionally Similar Code in Complex Codebases

Presented by Mike Su at the 24th IEEE International Conference on Program Comprehension (ICPC), May 2016.
ACM SIGSOFT Distinguished Paper Award

Challenges in Behavioral Code Clone Detection

Presented by Mike Su at the 10th International Workshop on Software Clones (IWSC), March 2016.
People’s Choice Award for Best Position Paper

Gameful Computational Thinking

Inspired by CS for All?  Eager to contribute?  The Programming Systems Lab, led by Professor Gail Kaiser, is building a collaborative game-based learning and assessment system that infuses computational thinking in grade 6-8 curricula.  Near-term projects involve:
  • Tooling Scratch with additional game design features
  • Expanding a visual assessment language and authoring environment based in Blockly
  • Enhancing an assessment server coded in Go and increasingly leveraging Node.js
  • Developing automated assessment plug-ins in any language capable of exposing HTTP endpoints (a minimal sketch appears below)
  • Visualizing formative feedback in an online dashboard with Bootstrap, AngularJS, and Node.js
  • Building a web-based affinity space to enable the crowdsourcing of game and assessment libraries
  • Architecting infrastructure to support student modeling and knowledge tracing
  • Designing experiments for system evaluation in after-school and classroom environments
Alternative project proposals and CVN students are welcome.  Participants will register for 3 credits of COMS E6901, section 014.  To learn more, please contact Jeff Bender, jeffrey.bender@columbia.edu.
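
For concreteness, here is a minimal sketch of what a stand-alone assessment plug-in exposing an HTTP endpoint might look like, written in Java with the JDK's built-in HttpServer. The /assess path, the JSON response, and the scoring logic are illustrative placeholders, not SAGE's actual plug-in protocol; any language able to serve a comparable endpoint would work equally well.

import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Hypothetical stand-alone assessment plug-in reachable over HTTP.
public class AssessmentPlugin {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/assess", exchange -> {
            // Read the submitted project; the real submission format would be defined by the assessment server.
            byte[] submission = exchange.getRequestBody().readAllBytes();
            // Placeholder scoring logic; a real plug-in would analyze the submission here.
            String result = "{\"score\": " + (submission.length > 0 ? 1 : 0) + "}";
            byte[] body = result.getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start(); // the assessment server would POST submissions to http://host:8080/assess
    }
}
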
SAGE | Social Addictive Gameful Engineering

Code Relatives: Detecting Similarly Behaving Software


@inproceedings{Su:2016:CRD:2950290.2950321,
author = {Su, Fang-Hsiang and Bell, Jonathan and Harvey, Kenneth and Sethumadhavan, Simha and Kaiser, Gail and Jebara, Tony},
title = {Code Relatives: Detecting Similarly Behaving Software},
booktitle = {24th ACM SIGSOFT International Symposium on Foundations of Software Engineering},
series = {FSE 2016},
year = {2016},
isbn = {978-1-4503-4218-6},
location = {Seattle, WA, USA},
pages = {702--714},
numpages = {13},
url = {http://doi.acm.org/10.1145/2950290.2950321},
doi = {10.1145/2950290.2950321},
acmid = {2950321},
publisher = {ACM},
address = {New York, NY, USA},
keywords = {Code relatives, code clones, link analysis, runtime behavior, subgraph matching},
note = {Artifact accepted as platinum.}
}

Record/Replay Bug Reproduction for Java

There will inevitably continue to be bugs that escape every testing approach but eventually impact users, who then file bug reports. Reproducing field failures in the development environment can be difficult, however, especially for software that behaves non-deterministically, relies on remote resources, or has complex reproduction steps (users may not even know what led up to triggering the flaw, particularly when the software interacts with external devices, databases, etc. in addition to human users). A record/replay approach therefore captures the state of the system just before a bug is encountered, so that the steps leading up to that state can be replayed later in the lab.

The naive approach of constant logging in anticipation of a defect tends to produce unacceptably high overheads (reaching 2,000+%) in the deployed application. Novel solutions that lower this overhead typically limit the depth of the information recorded (e.g., keeping only a stack trace rather than a complete state history) or its breadth (e.g., logging only during execution of a particular subsystem that a developer identifies as potentially buggy). But limiting the depth of information gathered may fail to reproduce an error if the defect does not present itself immediately, and limiting logging to a specific subcomponent makes it possible to reproduce the bug only if it occurred within that subcomponent.

Our new technique, called “Chronicler”, instead captures program execution in a manner that allows for deterministic replay in the lab with very low overhead. The key insight is to log sources of non-determinism only at the library level – allowing for a lightweight recording process while still supporting a complete replay for debugging purposes (programs with no sources of non-determinism, e.g., no user interactions, are trivial to replay – just provide the same inputs). When a failure occurs, Chronicler automatically generates a test case that consists of the inputs (e.g., file or network I/O, user inputs, random numbers, etc.) that caused the system to fail. This general approach can be applied to any “managed” language that runs in a language virtual machine (for instance, JVM or Microsoft’s .NET CLR), requiring no modifications to the interpreter or environment, and thus addresses a different class of programs than related work for non-managed languages like C and C++.
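
As a rough illustration of that insight (not ChroniclerJ's actual API or log format), the sketch below wraps a single non-deterministic library call so that its result is appended to a log in record mode and returned verbatim from the log in replay mode; deterministic computation is simply re-executed.

import java.util.ArrayDeque;
import java.util.Deque;

// Minimal record/replay sketch for one source of non-determinism.
public class NonDeterminismLog {
    public enum Mode { RECORD, REPLAY }

    private final Mode mode;
    private final Deque<Long> log;

    public NonDeterminismLog(Mode mode, Deque<Long> log) {
        this.mode = mode;
        this.log = log;
    }

    // Stand-in for any non-deterministic source (time, random numbers, I/O results).
    public long currentTimeMillis() {
        if (mode == Mode.RECORD) {
            long value = System.currentTimeMillis(); // real library call in the field
            log.addLast(value);                      // capture it for later replay
            return value;
        }
        return log.removeFirst();                    // in the lab, return the recorded value
    }

    public static void main(String[] args) {
        Deque<Long> log = new ArrayDeque<>();
        long recorded = new NonDeterminismLog(Mode.RECORD, log).currentTimeMillis();
        long replayed = new NonDeterminismLog(Mode.REPLAY, log).currentTimeMillis();
        System.out.println(recorded == replayed); // true: the replayed run sees the same value
    }
}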

We expect to extend and use this tool as part of the Mutable Replay project, and are seeking new project students in tandem with that effort.

Contact Professor Gail Kaiser (kaiser@cs.columbia.edu)

Team Members

Faculty
Gail Kaiser

Former Graduate Students
Jonathan Bell
Nikhil Sarda

Links

Publications

Jonathan Bell, Nikhil Sarda and Gail Kaiser. Chronicler: Lightweight Recording to Reproduce Field Failures. 35th International Conference on Software Engineering, May 2013, pp. 362-371. See teaser video at https://www.youtube.com/watch?v=4IYGfdDnAJg.

Software

Download ChroniclerJ.

Toward Trustworthy Mutable Replay for Security Patches

Society is increasingly reliant on software, but deployed software contains security vulnerabilities and other bugs that can threaten privacy, property and even human lives. When a security vulnerability or other severe defect is discovered, a software patch is issued to attempt to fix the problem – but patches themselves can be incorrect, inadequate, and break mission-critical functionality. This project investigates the full workflow for the developer to rapidly diagnose the root cause of the bug, for the developer to test that a prospective patch indeed completely removes the bug without introducing new errors, and for user organizations to check the issued patch on their own configurations and production workloads before adopting the patch.

This project explores “mutable replay” technology to help reproduce, diagnose, and fix software bugs. A low-overhead recorder embedded in the application records the execution of software in the user environment in case a failure or exploit occurs, allowing the developer to later replay the recorded log – with exactly the same version of the code – to reproduce the problem. Such deterministic record/replay technology is reasonably well-understood. Mutable replay extends record/replay to enable logs recorded with the buggy version to be replayed after the modest code changes typical of critical patches, to show that patches work correctly – or, perhaps more significantly, do not work correctly and further debugging is needed.

We plan to leverage semantic information readily available to the developer, e.g., from the version repository, and to conduct well-understood static and dynamic analyses, in order to inform transformations to the recorded log: reuse the previously recorded responses from interface calls when they match the semantics of the modified code, and “go live” to obtain new inputs when they do not. For example, the recorded log simply ends where a crash occurred during the original recording, but “go live” enables the application to continue running beyond the end of the log if the code changes removed the cause of the crash. In the case where an exploit was injected during the original recording, the modified code that blocks the exploit would “go live” temporarily during the part of the execution where the exploit occurred, but the application may be able to continue execution thereafter, using the recorded log for the data that was not tainted by the exploit.
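
The hypothetical sketch below illustrates that replay policy: an interface call is served from the recorded log while the recorded entry still matches what the modified code requests, and the replayer “goes live” to obtain a fresh result when it does not (or when the log has run out, e.g., past the point where the original execution crashed). The log format and names are invented for illustration only.

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;
import java.util.function.Supplier;

// Illustrative mutable-replay sketch: reuse recorded responses when they still apply, go live otherwise.
public class MutableReplayer {
    // One recorded interface call: which call was made, and what it returned.
    record LogEntry(String callSignature, String result) {}

    private final Deque<LogEntry> recordedLog;

    public MutableReplayer(Deque<LogEntry> recordedLog) {
        this.recordedLog = recordedLog;
    }

    public String interfaceCall(String callSignature, Supplier<String> liveCall) {
        if (!recordedLog.isEmpty()
                && recordedLog.peekFirst().callSignature().equals(callSignature)) {
            // The patched code still makes the same call: reuse the recorded response.
            return recordedLog.removeFirst().result();
        }
        // The call differs from the log (or the log is exhausted): go live for this call;
        // later calls may again be served from the remaining log.
        return liveCall.get();
    }

    public static void main(String[] args) {
        Deque<LogEntry> log = new ArrayDeque<>(List.of(
                new LogEntry("readConfig()", "timeout=30"),
                new LogEntry("fetchRecord(42)", "Alice")));
        MutableReplayer replayer = new MutableReplayer(log);

        System.out.println(replayer.interfaceCall("readConfig()", () -> "timeout=60"));    // replayed from log: timeout=30
        System.out.println(replayer.interfaceCall("validateInput(7)", () -> "ok"));        // new call added by the patch: goes live -> ok
        System.out.println(replayer.interfaceCall("fetchRecord(42)", () -> "Alice-live")); // matches the remaining log again: Alice
    }
}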

This research involves many interesting problems in program analysis, software testing and debugging, program understanding, and software similarity analysis. The results of this research will benefit society and individuals by simplifying and hastening both generation and validation of patches, ultimately making software more reliable and secure.

Transparent Mutable Replay for Multicore Debugging and Patch Validation describes a proof-of-concept implementation at the Linux operating system level, developed several years ago in Prof. Nieh’s lab, which used a simple minimal edit distance metric to guide trial-and-error mutation of the recorded log during replay with a modified version of the code. This works very well in some cases, but cannot handle many common code changes. We now seek to develop a new prototype interfacing instead at the Java Virtual Machine level, to leverage the higher-level semantics available, guiding replay mutation using static analyses of the modified source code and dynamic analyses of the modified bytecode execution. We also plan to enhance the Linux implementation with analogous analyses.
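
For reference, a minimal version of the kind of edit distance metric mentioned above is sketched below: a plain Levenshtein distance between the recorded event sequence and the event sequence the modified code would produce, which a trial-and-error search could use to prefer replay mutations requiring the fewest edits. The event strings are made up, and the actual prototype's cost model and search are more involved.

import java.util.List;

// Levenshtein distance between a recorded log and the events produced by modified code.
public class LogEditDistance {
    static int distance(List<String> recorded, List<String> modified) {
        int[][] d = new int[recorded.size() + 1][modified.size() + 1];
        for (int i = 0; i <= recorded.size(); i++) d[i][0] = i;   // delete all recorded events
        for (int j = 0; j <= modified.size(); j++) d[0][j] = j;   // insert all modified events
        for (int i = 1; i <= recorded.size(); i++) {
            for (int j = 1; j <= modified.size(); j++) {
                int subst = recorded.get(i - 1).equals(modified.get(j - 1)) ? 0 : 1;
                d[i][j] = Math.min(Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1),
                                   d[i - 1][j - 1] + subst);
            }
        }
        return d[recorded.size()][modified.size()];
    }

    public static void main(String[] args) {
        List<String> recorded = List.of("open(f)", "read(f)", "write(s)", "close(f)");
        List<String> patched  = List.of("open(f)", "read(f)", "validate(x)", "write(s)", "close(f)");
        System.out.println(distance(recorded, patched)); // 1: the patch added one event
    }
}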

This is a large effort with numerous subparts, expected to progress over the next three or four years. We are seeking new students at all levels: PhD, MS and undergraduate.

Prospective new PhD students should have, or be able to quickly acquire, deep understanding of the JVM and/or Linux kernel, record/replay technology, and static and dynamic program analyses such as program slicing and taint tracking.

For new undergraduate and MS project students, we prefer students who would like to participate for two or more consecutive semesters. This project is most suited to students who have completed both 4115 and 4156 (or equivalents), or are taking them concurrently. A team of collaborating students would be ideal, but individual projects are also possible.

Contact Professor Gail Kaiser (kaiser@cs.columbia.edu)

Identifying Functionally Similar Code in Complex Codebases

@inproceedings{hitoshiio,
author = {Fang-Hsiang Su and Jonathan Bell and Gail Kaiser and Simha Sethumadhavan},
title = {Identifying Functionally Similar Code in Complex Codebases},
booktitle = {24th IEEE International Conference on Program Comprehension (ICPC)},
month = {May},
year = {2016},
pages = {1--10},
url = {http://dx.doi.org/10.1109/ICPC.2016.7503720},
note = {ACM SIGSOFT Distinguished Paper Award}
}

Challenges in Behavioral Code Clone Detection

@inproceedings{CodeRelatives.position,
author = {Fang-Hsiang Su and Jonathan Bell and Gail Kaiser},
title = {Challenges in Behavioral Code Clone Detection (Position Paper)},
booktitle = {10th International Workshop on Software Clones (IWSC), affiliated with IEEE 23rd International Conference on Software Analysis, Evolution, and Reengineering (SANER)},
month = {March},
year = {2016},
volume = {3},
pages = {21--22},
url = {http://dx.doi.org/10.1109/SANER.2016.75},
note = {People's Choice Award for Best Position Paper.}
}

Efficient Dependency Detection for Safe Java Test Acceleration

Presented by Jon Bell at the 10th Joint Meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering (ESEC/FSE), September 2015.

Efficient Dependency Detection for Safe Java Test Acceleration

@inproceedings{Bell:2015:EDD:2786805.2786823,
author = {Bell, Jonathan and Kaiser, Gail and Melski, Eric and Dattatreya, Mohan},
title = {Efficient Dependency Detection for Safe Java Test Acceleration},
booktitle = {2015 10th Joint Meeting on Foundations of Software Engineering},
series = {ESEC/FSE 2015},
year = {2015},
isbn = {978-1-4503-3675-8},
location = {Bergamo, Italy},
pages = {770--781},
numpages = {12},
url = {http://doi.acm.org/10.1145/2786805.2786823},
doi = {10.1145/2786805.2786823},
acmid = {2786823},
publisher = {ACM},
address = {New York, NY, USA},
keywords = {Test dependence, detection algorithms, empirical studies},
}