
Active Projects

Gameful Computational Thinking

Inspired by CS for All? The Programming Systems Lab, led by Professor Gail Kaiser, is building a game-based learning and assessment system that infuses computational thinking into grade 6-8 curricula. For the Fall 2017 semester, we’re recruiting students interested in collaborating as a Scrum team to complete an MVP expected to include several features:
  • A gameful direct instruction system that embeds Parsons Programming Puzzles in Scratch
  • A gameful constructionism mode that integrates scoring systems for play and automated assessment through Blockly
  • A gameful intelligent tutor that provides hints on-demand and just-in-time via clustering and sequential pattern mining algorithms (illustrated in the sketch after this list)
  • A gameful affinity space that enables curricula assimilation with quest management and learning metrics using Node.js
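As a rough illustration of the just-in-time hinting idea, the sketch below matches a learner’s action sequence against frequent sequences mined from prior successful solutions and suggests the next step. The pattern data, action names, and class are hypothetical placeholders, not SAGE’s actual design:

```java
import java.util.List;
import java.util.Optional;

// Minimal sketch of just-in-time hinting via mined sequential patterns.
// The pattern set and action names are hypothetical, not SAGE's data model.
public class HintSketch {

    // Frequent action sequences mined (offline) from learners who solved the puzzle.
    static final List<List<String>> MINED_PATTERNS = List.of(
        List.of("when-flag-clicked", "move-steps", "repeat", "turn-degrees"),
        List.of("when-flag-clicked", "repeat", "move-steps"));

    // Suggest the next action from the longest mined pattern that starts
    // with everything the learner has done so far.
    static Optional<String> nextHint(List<String> learnerActions) {
        return MINED_PATTERNS.stream()
            .filter(p -> p.size() > learnerActions.size()
                      && p.subList(0, learnerActions.size()).equals(learnerActions))
            .max((a, b) -> Integer.compare(a.size(), b.size()))
            .map(p -> p.get(learnerActions.size()));
    }

    public static void main(String[] args) {
        List<String> soFar = List.of("when-flag-clicked", "move-steps");
        nextHint(soFar).ifPresent(h -> System.out.println("Hint: try adding " + h));
    }
}
```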
Our projects with students from previous semesters have produced a rich architecture, and we anticipate a productive period ahead, with opportunities to co-author publications documenting results from experiments with grade 6-8 students. Participants will register for 3 credits of COMS E6901, section 014. CVN students are welcome. To learn more, please contact Jeff Bender (jeffrey.bender@columbia.edu).
SAGE | Social Addictive Gameful Engineering

Toward Trustworthy Mutable Replay for Security Patches

Society is increasingly reliant on software, but deployed software contains security vulnerabilities and other bugs that can threaten privacy, property, and even human lives. When a security vulnerability or other severe defect is discovered, a software patch is issued to attempt to fix the problem, but patches themselves can be incorrect or inadequate, and can break mission-critical functionality. This project investigates the full workflow: the developer rapidly diagnoses the root cause of the bug, the developer tests that a prospective patch indeed completely removes the bug without introducing new errors, and user organizations check the issued patch on their own configurations and production workloads before adopting it.

This project explores “mutable replay” technology to help reproduce, diagnose, and fix software bugs. A low-overhead recorder embedded in the application records the execution of software in the user environment in case a failure or exploit occurs, allowing the developer to later replay the recorded log – with exactly the same version of the code – to reproduce the problem. Such deterministic record/replay technology is reasonably well-understood. Mutable replay extends record/replay to enable logs recorded with the buggy version to be replayed after the modest code changes typical of critical patches, to show that patches work correctly – or, perhaps more significantly, do not work correctly and further debugging is needed.
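As a minimal illustration of the underlying record/replay idea, the sketch below interposes on a single nondeterministic call, logging its responses during recording and feeding them back verbatim during replay. The class, its in-memory log, and the interposed call are simplifying assumptions; real recorders interpose at the JVM or OS level and persist the log:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Random;

// Minimal sketch of deterministic record/replay at one interposition point.
// The in-memory "log" and this toy wrapper are illustrative assumptions only.
public class RecordReplaySketch {
    enum Mode { RECORD, REPLAY }

    private final Mode mode;
    private final Deque<Long> log;      // recorded responses, in call order
    private final Random live = new Random();

    RecordReplaySketch(Mode mode, Deque<Long> log) {
        this.mode = mode;
        this.log = log;
    }

    // The single interposed "nondeterministic interface call".
    long nextValue() {
        if (mode == Mode.RECORD) {
            long v = live.nextLong();
            log.addLast(v);             // record the response
            return v;
        }
        return log.removeFirst();       // replay the recorded response
    }

    public static void main(String[] args) {
        Deque<Long> log = new ArrayDeque<>();
        RecordReplaySketch rec = new RecordReplaySketch(Mode.RECORD, log);
        long first = rec.nextValue();

        RecordReplaySketch rep = new RecordReplaySketch(Mode.REPLAY, log);
        // Replay returns exactly the value observed during recording.
        System.out.println(first == rep.nextValue());  // true
    }
}
```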

We plan to leverage semantic information readily available to the developer, e.g., from the version repository, and to conduct well-understood static and dynamic analyses that inform transformations to the recorded log: previously recorded responses from interface calls are reused when they match the semantics of the modified code, and the application “goes live” to obtain new inputs when they don’t. For example, the recorded log simply ends where a crash occurred during the original recording, but “go live” enables the application to continue running beyond the end of the log if the code changes removed the cause of the crash. In the case where an exploit was injected during the original recording, the modified code that blocks the exploit would “go live” temporarily during the part of the execution where the exploit occurred, but the application may be able to continue execution thereafter, using the recorded log for the data that was not tainted by the exploit.
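The sketch below illustrates the “go live” decision in miniature: a recorded response is reused when the replayed call matches the log, and execution falls back to live mode otherwise (or when the log runs out). The Call record, the name-plus-argument notion of “matching”, and liveExecute are hypothetical simplifications of the semantics-driven matching described above:

```java
import java.util.Iterator;
import java.util.List;

// Minimal sketch of the "go live" decision during mutable replay.
// Call and liveExecute are hypothetical simplifications.
public class GoLiveSketch {
    record Call(String name, String arg, String response) {}

    private final Iterator<Call> log;

    GoLiveSketch(List<Call> recordedLog) {
        this.log = recordedLog.iterator();
    }

    String handle(String name, String arg) {
        if (log.hasNext()) {
            Call recorded = log.next();
            // Matching here is just name+argument equality; the real system
            // would consult static/dynamic analyses of the patched code.
            if (recorded.name().equals(name) && recorded.arg().equals(arg)) {
                return recorded.response();   // reuse the recorded response
            }
        }
        return liveExecute(name, arg);        // go live for new inputs
    }

    private String liveExecute(String name, String arg) {
        return "live:" + name + "(" + arg + ")";
    }

    public static void main(String[] args) {
        GoLiveSketch replay = new GoLiveSketch(
            List.of(new Call("read", "config", "port=80")));
        System.out.println(replay.handle("read", "config")); // reused: port=80
        System.out.println(replay.handle("read", "cert"));   // unmatched: goes live
    }
}
```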

This research involves many interesting problems in program analysis, software testing and debugging, program understanding, and software similarity analysis. The results of this research will benefit society and individuals by simplifying and hastening both generation and validation of patches, ultimately making software more reliable and secure.

Transparent Mutable Replay for Multicore Debugging and Patch Validation describes a proof-of-concept implementation at the Linux operating system level, developed several years ago in Prof. Nieh’s lab, which used a simple minimal edit distance metric to guide trial-and-error mutation of the recorded log during replay with a modified version of the code. This works very well in some cases, but cannot handle many common code changes. We now seek to develop a new prototype interfacing instead at the Java Virtual Machine level, to leverage the higher-level semantics available, and to guide replay mutation using static analyses of the modified source code and dynamic analyses of the modified bytecode execution. We also plan to enhance the Linux implementation with analogous analyses.
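To make the earlier prototype’s guidance metric concrete, here is a minimal sketch of a Levenshtein edit distance over event sequences; among candidate mutations of the recorded log, one with a lower distance to the original would be preferred. The event names are invented, and the real metric operates on recorded log entries rather than strings:

```java
import java.util.List;

// Minimal sketch of a minimal edit distance metric: Levenshtein distance
// between the recorded event sequence and a candidate mutated sequence.
// Event names here are invented placeholders.
public class LogEditDistanceSketch {

    static int editDistance(List<String> a, List<String> b) {
        int[][] d = new int[a.size() + 1][b.size() + 1];
        for (int i = 0; i <= a.size(); i++) d[i][0] = i;
        for (int j = 0; j <= b.size(); j++) d[0][j] = j;
        for (int i = 1; i <= a.size(); i++) {
            for (int j = 1; j <= b.size(); j++) {
                int sub = a.get(i - 1).equals(b.get(j - 1)) ? 0 : 1;
                d[i][j] = Math.min(d[i - 1][j - 1] + sub,
                          Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1));
            }
        }
        return d[a.size()][b.size()];
    }

    public static void main(String[] args) {
        List<String> recorded = List.of("open", "read", "write", "close");
        List<String> mutated  = List.of("open", "read", "read", "write", "close");
        // Lower distance = more plausible mutation of the recorded log.
        System.out.println(editDistance(recorded, mutated)); // 1
    }
}
```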

This is a large effort with numerous subparts, expected to progress over the next three or four years. We are seeking new students at all levels: PhD, MS and undergraduate.

Prospective new PhD students should have, or be able to quickly acquire, deep understanding of the JVM and/or Linux kernel, record/replay technology, and static and dynamic program analyses such as program slicing and taint tracking.

For new undergraduate and MS project students, we prefer students who would like to participate for two or more consecutive semesters. This project is best suited to students who have completed both 4115 and 4156 (or equivalents), or are taking them concurrently. A team of collaborating students would be ideal, but individual projects are also possible.

Contact Professor Gail Kaiser (kaiser@cs.columbia.edu)

Dynamic Code Similarity

“Code clones” are statically similar code fragments that arise from copy/paste or from independently writing lookalike code; best practice either removes clones (refactoring) or tracks them (e.g., to ensure bugs fixed in one clone are also fixed in the others). We instead study dynamically similar code, under two different similarity models.

One model is functional similarity: finding code fragments that exhibit similar input/output behavior. Other researchers reported success with C but failure for object-oriented languages, owing to the many challenges of executing OO code fragments in isolation. We side-stepped the isolation problem by adapting in-vivo testing, previously developed by our lab for another purpose, in which unit test cases are run in the application states constructed during full system test cases.
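A minimal sketch of input/output-based similarity follows: two fragments count as functionally similar when they produce equal outputs on a shared input set. In our tooling the inputs come from in-vivo system-test states; here they are hard-coded, and the fragments are toy lambdas:

```java
import java.util.List;
import java.util.function.IntUnaryOperator;

// Minimal sketch of input/output-based functional similarity: two fragments
// are deemed similar if they produce equal outputs on a shared input set.
// The inputs here are a hard-coded hypothetical sample.
public class FunctionalSimilaritySketch {

    static boolean ioSimilar(IntUnaryOperator f, IntUnaryOperator g,
                             List<Integer> inputs) {
        return inputs.stream().allMatch(x ->
            f.applyAsInt(x) == g.applyAsInt(x));
    }

    public static void main(String[] args) {
        // Textually different fragments with the same I/O behavior.
        IntUnaryOperator doubleByAdd = x -> x + x;
        IntUnaryOperator doubleByShift = x -> x << 1;
        System.out.println(ioSimilar(doubleByAdd, doubleByShift,
                                     List.of(0, 1, 7, -3, 1024))); // true
    }
}
```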

Our other dynamic similarity model is the novel notion of behavioral similarity: two code fragments are deemed similar if matching subgraphs can be found in the dynamic data dependency graphs representing the instruction-level execution traces. We developed a fast subgraph isomorphism algorithm to recognize and cluster these execution-level similarities, using PageRank to prioritize the “most important” instructions on which to pivot subgraph comparisons.
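As a rough sketch of the prioritization step, the code below runs PageRank over a tiny, hypothetical data dependency graph, where an edge i → j means instruction j consumed a value produced by instruction i; high-rank instructions would serve as pivots for subgraph comparison. This is an illustrative sketch, not the DyCLink implementation:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;

// Minimal sketch of PageRank over a (tiny, hypothetical) dynamic data
// dependency graph. Dangling nodes' rank mass is dropped for brevity.
public class InstructionRankSketch {

    static double[] pageRank(Map<Integer, List<Integer>> edges, int n,
                             double damping, int iterations) {
        double[] rank = new double[n];
        Arrays.fill(rank, 1.0 / n);
        for (int it = 0; it < iterations; it++) {
            double[] next = new double[n];
            Arrays.fill(next, (1 - damping) / n);
            for (var e : edges.entrySet()) {
                List<Integer> outs = e.getValue();
                for (int dst : outs) {
                    next[dst] += damping * rank[e.getKey()] / outs.size();
                }
            }
            rank = next;
        }
        return rank;
    }

    public static void main(String[] args) {
        // 0: load, 1: add, 2: mul, 3: store -- mul consumes both earlier values.
        Map<Integer, List<Integer>> deps = Map.of(
            0, List.of(1, 2),
            1, List.of(2),
            2, List.of(3));
        double[] r = pageRank(deps, 4, 0.85, 50);
        for (int i = 0; i < r.length; i++)
            System.out.printf("instruction %d: %.3f%n", i, r[i]);
    }
}
```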

Our empirical studies show that our tools find most of the same “similar” code as the best static code clone detectors, but also find many pairs that those detectors cannot, because the code looks very different even though it is functionally and/or behaviorally similar. (Dynamic detection will not necessarily find all static code clones, because lookalike code involving polymorphism need not exhibit the same function/behavior.)

We are investigating various applications of statically and dynamically similar code detection, including code search, program understanding, detecting malware, and re-engineering legacy software to use modern APIs.

All project student slots have already been filled for Fall 2016, but there may be openings for Spring 2017.

Contact Mike Su (mikefhsu@cs.columbia.edu)

Team Members

Faculty
Gail Kaiser
Simha Sethumadhavan
Tony Jebara

Graduate Students
Fang-Hsiang (“Mike”) Su
Apoorv Patwardhan

Former Graduate Students
Jonathan Bell
Kenny Harvey

Links

Publications

Fang-Hsiang Su, Jonathan Bell, Kenneth Harvey, Simha Sethumadhavan, Gail Kaiser and Tony Jebara. Code Relatives: Detecting Similarly Behaving Software. 24th ACM SIGSOFT International Symposium on the Foundations of Software Engineering (FSE), November 2016. Artifact accepted as platinum. (To appear, earlier technical report available.)

Fang-Hsiang Su, Jonathan Bell, Gail Kaiser and Simha Sethumadhavan. Identifying Functionally Similar Code in Complex Codebases. 24th IEEE International Conference on Program Comprehension (ICPC), May 2016, pp. 1-10. (ACM SIGSOFT Distinguished Paper Award)

Fang-Hsiang Su, Jonathan Bell, and Gail Kaiser. Challenges in Behavioral Code Clone Detection (Position Paper). 10th International Workshop on Software Clones (IWSC), affiliated with IEEE 23rd International Conference on Software Analysis, Evolution, and Reengineering (SANER), March 2016, volume 3, pp. 21-22. (People’s Choice Award for Best Position Paper)

Software

Download DyCLink from GitHub.

Download HitoshiIO from GitHub.

Download the Code Similarity Experiments toolkit from GitHub.