IWPD

The 5th IEEE International Workshop on Program Debugging

Hotel RC - Normanna Room

IWPD Welcome and Opening

Monday, Nov. 3, 09:15 - 09:30

Sudipto Ghosh, J. Jenny Li. Welcome Message

 

IWPD #1: Event Set and Trace Reduction

Monday, Nov. 3, 09:30 - 10:30

Chair: Birgit Hofer

1.     Hanefi Mercan and Cemal Yilmaz. Pinpointing Failure Inducing Event Orderings

2.     Teemu Kanstrén and Marsha Chechik. Trace Reduction and Pattern Analysis to Assist Debugging in Model-Based Testing

 

IWPD #2: Panel Discussion: Program Debugging – Research and Practice

Moderator: W. Eric Wong

Monday, Nov. 3, 11:00 - 12:30

 

IWPD #3: Debugging with Support for Reliability, Static Analysis, and Temporal Assertions

Monday, Nov. 3, 14:00 - 15:30

Chair: W. Eric Wong

1.     Wafa Jaffal and Jeff Tian. Defect Analysis and Reliability Assessment for Transactional Web Applications

2.     Qian Wang, Dahai Jin and Yunzhan Gong. A Memory Model Based on Three-Valued Matrix for Static Defect Detection

3.     Ziad Al-Sharif, Clinton Jeffery and Mahmoud Said. Debugging with Dynamic Temporal Assertions

 

IWPD #4: Quality and Applicability of Debugging and Fault Localization

Monday, Nov. 3, 16:00 - 18:00

Chair: Ziad Al-Sharif

1.     Birgit Hofer. Spectrum-Based Fault Localization for Spreadsheets: Influence of Correct Output Cells on the Fault Localization Quality

2.     Benjamin Siegmund, Michael Perscheid, Marcel Taeumel and Robert Hirschfeld. Studying the Advancement in Debugging Practice of Professional Software Developers

 


Pinpointing Failure Inducing Event Orderings
A new method is designed to pinpoint failure-inducing event orderings in event-based systems. Sequence Covering Arrays (SCAs) are used as test suites for systems under test that exhibit only deterministic failures. After executing all test cases of the SCA to obtain full coverage of event orderings of length 2 and 3, new test cases are generated using a method inspired by the Delta Debugging algorithm. For each failure type, a suspicious set of event orderings, those most likely to be failure inducing, is constructed, and these sets are narrowed down adaptively using the results of the newly generated test cases. Experiments are carried out with different fault scenarios. Our results suggest that the proposed algorithm can detect all failure-inducing event orderings using a small number of extra test cases.
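A minimal sketch of the adaptive narrowing step, assuming deterministic failures; run_test (returns True if a test case fails) and make_test (builds a test case exercising a given set of orderings) are hypothetical hooks, and the group-testing bisection shown here only approximates the paper's algorithm:

    # Illustrative sketch, in the spirit of delta debugging; run_test
    # and make_test are assumed hooks, not the authors' implementation.
    def narrow_suspicious(suspicious, run_test, make_test):
        confirmed = []
        pending = [list(suspicious)]
        while pending:
            group = pending.pop()
            if not run_test(make_test(group)):
                continue                     # deterministic: group exonerated
            if len(group) == 1:
                confirmed.append(group[0])   # this ordering alone reproduces the failure
            else:
                mid = len(group) // 2
                pending.append(group[:mid])  # a culprit is inside the group;
                pending.append(group[mid:])  # search both halves
        return confirmed

Because failures are assumed deterministic, one passing test case exonerates a whole group of orderings at once, which is what keeps the number of extra test cases small.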

Trace Reduction and Pattern Analysis to Assist Debugging in Model-Based Testing
Model-based testing (MBT) is a technique for generating test cases from test models. One of the benefits of MBT is the ability to have a computer generate and execute extensive test sets from the test models, achieving high coverage. However, when such large test sets are automatically generated and executed, the resulting failure traces can be very long and difficult to debug for root cause analysis. In this paper, we present a technique for minimizing the length of a failure trace, for creating variants of it, and for mining patterns in the trace variants to assist in root cause analysis. We demonstrate the technique on a model of a GSM SIM card.
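The trace-minimization step can be sketched as a simplified ddmin-style reduction; replay is an assumed hook that re-executes a candidate trace against the system under test and reports whether the original failure reproduces, and the variant-creation and pattern-mining steps of the paper are omitted:

    # Illustrative sketch: shrink a failing trace by dropping chunks
    # that are not needed to reproduce the failure; replay is assumed.
    def minimize_trace(trace, replay):
        chunk = max(1, len(trace) // 2)
        while chunk >= 1:
            i, shrunk = 0, False
            while i < len(trace):
                candidate = trace[:i] + trace[i + chunk:]  # drop one chunk
                if candidate and replay(candidate):        # still fails?
                    trace, shrunk = candidate, True        # keep shorter trace
                else:
                    i += chunk                             # chunk is needed
            if not shrunk:
                chunk //= 2                                # finer granularity
        return trace

The minimized trace and its variants can then be compared to mine the event patterns they share, pointing the analyst at a likely root cause.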

Defect Analysis and Reliability Assessment for Transactional Web Applications
In this research, we analyze defects collected from different failure sources and use them for reliability assessment. A failure is the inability of a system or component to perform its required functions within specified performance requirements. Reliability is the probability of failure-free operation, and it is one of the most important quality attributes for the users of web applications. Reliability analysis and modeling can help web service providers assess current reliability, predict future reliability, and quantify the potential improvement needed to reach a reliability target. Failure information and the related workload measurements are required inputs for reliability models under variable workload, such as models of transactional web applications. In this paper, we used session tracking data in addition to web server logs to characterize and measure the variable application workload. Session data provides the accurate measurements of web application usage, such as the number of transactions, that are required for reliability analysis. In addition to web failures recorded in web logs and the defect tracking system, we also considered failures from server-side application error logs to quantify unique failures that occur on the web server. We applied different reliability assessment approaches, covering both the time domain and the input domain, to data from our case study application and demonstrated the applicability and effectiveness of our approach.
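As a concrete example of the input-domain side, here is a worked Nelson-model estimate; the counts are invented for illustration, and the paper applies several time-domain and input-domain models rather than only this one:

    # Input-domain reliability (Nelson model): R = 1 - f/n, where n is
    # the number of transactions (workload recovered from session data)
    # and f is the number of observed unique failures. Numbers made up.
    transactions = 12_500
    failures = 37

    reliability = 1 - failures / transactions
    mtbf = transactions / failures   # mean transactions between failures
    print(f"R = {reliability:.4f}, MTBF = {mtbf:.1f} transactions")
    # R = 0.9970, MTBF = 337.8 transactions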

A Memory Model Based on Three-Valued Matrix for Static Defect Detection
Knowledge of pointer behavior is very important for static analysis tools, especially those aimed at detecting defects in C and related programming languages. However, due to the use of structures and the weak typing of C-like languages, obtaining such knowledge is not easy. In this paper, we present a modeling method based on a three-valued matrix. This model enables us to perform flow-sensitive points-to analysis to obtain knowledge of pointer behavior. In particular, it relies on offsets instead of access paths to denote disjoint objects in the memory space, thereby addressing the issue mentioned above. Moreover, the three-valued matrix is a compact representation of points-to sets; such a representation is key to designing an effective flow-sensitive points-to analysis. We implemented a prototype of our method and evaluated it on a set of open-source benchmarks. The experimental results demonstrate the effectiveness of our method and show that it is suitable for exploring large programs with reasonable accuracy.
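A small sketch of what a three-valued points-to matrix might look like; the truth values, the strong-update rule, and the join rule below are illustrative assumptions, not the paper's exact model:

    # Illustrative encoding; not the paper's exact model.
    NO, MAY, MUST = 0.0, 0.5, 1.0    # the three truth values

    class PointsToMatrix:
        # rows: pointer variables; columns: memory objects identified
        # by (base, offset) rather than by an access path
        def __init__(self, pointers, objects):
            self.rows = {p: i for i, p in enumerate(pointers)}
            self.cols = {o: j for j, o in enumerate(objects)}
            self.m = [[NO] * len(objects) for _ in pointers]

        def assign(self, p, obj):
            # strong update for `p = &obj`: p MUST point to obj only
            row = self.m[self.rows[p]]
            for j in range(len(row)):
                row[j] = NO
            row[self.cols[obj]] = MUST

        def join(self, other):
            # at a control-flow join, cells on which the two branch
            # states disagree weaken to MAY
            for i, row in enumerate(self.m):
                for j in range(len(row)):
                    if row[j] != other.m[i][j]:
                        row[j] = MAY

    # objects named by base and byte offset, e.g. two fields of struct s
    pta = PointsToMatrix(["p", "q"], [("s", 0), ("s", 4)])
    pta.assign("p", ("s", 4))        # p = &s.<field at offset 4>

Naming objects by offset rather than access path is what lets the matrix keep struct fields apart even under C's weak typing.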

Debugging with Dynamic Temporal Assertions
Bugs vary in their root causes and the behaviors they reveal; some may cause a crash or a core dump, while others may cause incorrect or missing output or otherwise unexpected behavior. Moreover, most bugs are revealed long after their actual cause. A variable might be assigned early in the execution, and that value may trigger a bug far from the place where it was last assigned. This often requires users to manually track heuristic information across different execution states, such as a trace of specific variables' values and the locations where they were assigned, functions and their return values, and detailed execution paths. This paper introduces Dynamic Temporal Assertions (DTAs) into the conventional source-level debugging session. It extends UDB, a typical gdb-like source-level debugger, with on-the-fly temporal assertions. Each assertion is capable of: 1) validating a sequence of execution states, called a temporal interval, and 2) referencing out-of-scope variables, which may not be live in the execution state at evaluation time. These DTA assertions are not bound by the limitations of ordinary in-code assertions, such as locality, temporality, and static hardwiring into the source code. Furthermore, they go beyond typical interactive debugging sessions and their conditional breakpoints and watchpoints.
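An invented miniature of the DTA idea, not UDB's actual syntax: the assertion records snapshots over a temporal interval as execution proceeds, so its predicate can still see variables that are out of scope by the time the assertion is evaluated:

    # Invented API for illustration; not UDB's actual syntax.
    class TemporalAssertion:
        def __init__(self, predicate, kind="always"):
            self.predicate = predicate   # state -> bool
            self.kind = kind             # hold "always" or "sometime"
            self.states = []             # snapshots captured on the fly

        def record(self, **state):
            # called by the debugger at each event inside the interval
            self.states.append(dict(state))

        def evaluate(self):
            results = (self.predicate(s) for s in self.states)
            return all(results) if self.kind == "always" else any(results)

    # assert that balance never went negative inside the interval, even
    # if balance is out of scope when the assertion is finally evaluated
    a = TemporalAssertion(lambda s: s["balance"] >= 0, kind="always")
    a.record(balance=10); a.record(balance=3); a.record(balance=0)
    print(a.evaluate())   # True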

Spectrum-Based Fault Localization for Spreadsheets: Influence of Correct Output Cells on the Fault Localization Quality
Spreadsheets used in companies often contain several thousand formulas. The localization of faulty cells in such large spreadsheets can be time-consuming and frustrating. Spectrum-based fault localization (SFL) supports users in locating the faulty cell(s) faster. However, SFL depends on the information the user provides. In this paper, we address three research questions in this context: (RQ1) Do spreadsheets contain correct output cells that positively or negatively influence the ranking of the faulty cells? (RQ2) If yes, is it possible to determine a priori which correct output cells would positively influence the ranking? (RQ3) Is it possible to avoid a decreasing fault localization quality when adding more correct output cells? This paper shows that there exist correct output cells which positively or negatively influence the ranking. In particular, correct output cells with the largest cones positively influence the ranking of the faulty cell. Balancing the relation of correct and erroneous output cells by duplicating the cones of erroneous output cells improves the fault localization quality.
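To illustrate the mechanism behind these questions, here is how a standard SFL coefficient reacts to correct output cells; the paper does not commit to a particular metric, so the common Ochiai coefficient is used here as an assumption:

    # Ochiai chosen for illustration; the paper does not name its metric.
    from math import sqrt

    def ochiai(failed_cover, failed_uncover, passed_cover):
        # failed_cover:   erroneous outputs whose cone contains the cell
        # failed_uncover: erroneous outputs whose cone misses the cell
        # passed_cover:   correct outputs whose cone contains the cell
        total_failed = failed_cover + failed_uncover
        denom = sqrt(total_failed * (failed_cover + passed_cover))
        return failed_cover / denom if denom else 0.0

    print(ochiai(3, 0, 1))   # ~0.866
    print(ochiai(3, 0, 4))   # ~0.655: more correct cones cover the
                             # cell, so its suspiciousness drops

A correct output cell whose cone contains the faulty cell lowers that cell's score (negative influence), whereas one whose cone covers only other cells pushes those cells down the ranking and thereby raises the faulty cell (positive influence).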

Studying the Advancement in Debugging Practice of Professional Software Developers
In 1997, Henry Lieberman stated that debugging is the dirty little secret of computer science. Since then, several promising debugging technologies have been developed, such as back-in-time debuggers and automatic fault localization methods. However, the last study of the state of the art in debugging is more than 15 years old, so it is not clear whether these new approaches have been applied in practice. For that reason, we investigate the current state of debugging in a new comprehensive study. First, we review the available literature and learn about current approaches and study results. Second, we observe several professional developers while debugging and interview them about their experiences. Based on these results, we create a questionnaire that should serve as the basis for a later large-scale online debugging survey. With these results, we expect new insights into debugging practice that help to suggest new directions for future research.